5–6 February 2026 | HKPC Building, Kowloon

Beyond EdTech Tools:
Building Systematic AI Capability for Educators

AIBoK is attending HKPC's AI in Education Forum to understand how Hong Kong schools are navigating AI adoption — and whether our capability-first training approach translates from enterprise to education.

#CapabilityOverHype

What We're Investigating at HKPC Forum 2026

60+ EdTech solutions. 22 QEF-funded projects. 20+ seminars on AI integration.

The HKPC AI in Education Forum represents Hong Kong's most comprehensive showcase of AI tools for schools. But here's what we're watching for:

  • Are schools getting tool training or capability building?
  • Do teachers need platform-specific skills or platform-agnostic AI literacy?
  • What happens when the next AI tool launches — do schools retrain from scratch?

AIBoK's approach in enterprise settings is diagnostic-led, vendor-neutral capability development. We're attending HKPC Forum to test whether this model translates to education — or whether schools need something fundamentally different.

Join Our Mailing List for Post-Event Findings

From Our Enterprise Work: What We've Learned About AI Adoption

In enterprise settings, we see three patterns:

  1. Tool-Obsessed Organisations → Endless pilots, no productivity gains, change fatigue
  2. Skills-Obsessed Organisations → Generic "prompt engineering" courses, no context application
  3. Capability-Obsessed Organisations → Diagnostic-led training, role-specific use cases, measurable ROI

Early Hypothesis: Education Faces the Same Challenge

Teachers are being offered tool-specific training (ChatGPT, Gemini, Copilot) without systematic frameworks for:

  • When to use AI vs traditional methods
  • How to evaluate AI outputs for accuracy/bias
  • Why certain tasks benefit from AI vs human judgment

We're at HKPC Forum to validate (or invalidate) this hypothesis.

Platform-Agnostic AI Capability Development (What That Means for Schools)

For Enterprises → For Education

  • Diagnostic-led training (GAI-TNA + GAI-MM) → Pre-training needs assessment for schools
  • Role-specific modules (IT, knowledge workers, executives) → Educator-specific, administrator-specific, and student-facing modules
  • Vendor-neutral approach (works with ANY AI tool) → Teach AI literacy, not tool mastery
  • Measurable productivity outcomes → Teacher time savings, student engagement metrics, learning outcome improvements

Our enterprise methodology:

  1. Diagnose capability gaps (not just "what tools do you have?")
  2. Design role-specific training (not generic AI 101)
  3. Deliver practical, use-case-driven workshops (not theory lectures)
  4. Measure productivity/ROI (not just "satisfaction scores")

Can this work for schools? That's what we're testing.

Join Education Pilot Waitlist

Where to Find Us at HKPC Forum

Si Pham (AIBoK Co-Founder, ASEAN Lead) is attending:

Thursday 5 Feb, 14:20–14:40
Talent Selection and AI Education @ HKAGE
Theatre 1
Investigating: How gifted student programmes approach AI literacy and self-directed learning

💡 What We Bring to This Discussion: Diagnostic-Led Capability Development

HKAGE focuses on talent identification and nurturing gifted students through self-directed learning. AIBoK's diagnostic frameworks (Training Needs Assessment + Maturity Models) could translate to education contexts:

  • Capability Assessment ≠ IQ Testing: Our TNA approach measures practical AI capability (what can you do?) vs abstract intelligence (how smart are you?)
  • Self-Directed Learning Requires Scaffolding: Gifted students need frameworks for autonomous AI exploration that guard against common traps (accepting hallucinated output, over-reliance, blind trust)
  • Progression Models: Our maturity model (Levels 1–5, from Awareness through Practitioner to Architect) maps to student advancement through AI literacy stages
  • Measurement Matters: How do you assess whether AI-enhanced learning produces better outcomes? Our diagnostic checkpoints could inform education metrics

Question for HKAGE: Do gifted education programmes need different AI capability frameworks, or do they need the same frameworks applied faster/deeper?

Thursday 5 Feb, 16:15–16:45
QEF eLAFP Briefing — Background, Eligibility, Demo, and Notes
Theatre 1
Investigating: Government funding mechanisms for AI adoption in schools and assessment criteria for projects

💡 What We Bring to This Discussion: ROI-Driven Training Design

QEF eLAFP funds 22 AI education projects — but how do schools assess impact? AIBoK's approach to training ROI measurement could inform funding assessment:

  • Pre/Post Capability Assessment: Our diagnostic baselines enable measurable progress tracking (not just satisfaction surveys)
  • Productivity Metrics: Time saved, error reduction, and workflow improvements translate to reduced teacher workload and better student outcomes
  • Governance Frameworks: QEF-funded projects need sustainability beyond initial implementation — our governance modules address long-term capability maintenance
  • Vendor-Neutral Evaluation: How do schools compare solutions across 22 funded projects? Platform-agnostic assessment frameworks enable apples-to-apples comparison
  • Change Management Integration: Tool deployment ≠ adoption. Our change enablement frameworks help schools avoid "shelfware syndrome"

Question for QEF: Do funding criteria measure capability development (what teachers and students can do after training) rather than just tool deployment (what got installed)?

Want to Compare Notes?

  • Catch Si during breaks (check InnoSpace or Aviation Training Hub guided tours)
  • Schedule a 15-minute coffee chat via the link below
  • Or just say g'day if you spot the AIBoK badge

Book a Chat with Si

Get Our HKPC Forum Debrief: "AI in HK Education — Field Notes & Capability Gaps"

After the forum, we'll publish:

  • What we saw: Summary of 60+ EdTech solutions and QEF-funded projects
  • What we heard: Key themes from principal panels and educator workshops
  • What we learned: Capability gaps that platform-agnostic training could address
  • What we're testing: Whether our enterprise model translates to education

This isn't a sales document — it's genuine field research shared openly.

Join Our Mailing List (Debrief Late Feb 2026)

What Our Enterprise Beta Testers Say (Education Pilots Coming Soon)

"AIBoK's diagnostic-led approach helped us move beyond 'AI for AI's sake' to measurable productivity gains. The vendor-neutral framework means we're not locked into any single tool."
— Beta Tester, Enterprise IT Leader

"The face-to-face workshops made all the difference. Our team needed hands-on practice with real use cases, not theoretical lectures."
— Beta Tester, Knowledge Worker Team Lead

We're Now Testing Whether This Approach Works for Educators

If you're a school principal, IT coordinator, or EdTech decision-maker interested in systematic AI capability building (not just tool training), join our education pilot waitlist.

Join Education Pilot Waitlist

For EdTech Vendors: Train-the-Trainer Partnerships

Are you showcasing at HKPC Forum?

If your solution requires user training, we're exploring train-the-trainer partnerships where:

  • You provide the tool/platform
  • We provide capability-building methodology
  • Schools get systematic onboarding (not just tool manuals)

Particularly relevant for QEF eLAFP-funded projects, which must demonstrate measurable capability-development outcomes for funding compliance.

Early-stage conversations only — we're testing education sector fit.

Explore Partnership Opportunities

Get Forum Debrief | Meet Si at HKPC | Join Waitlist