CareerForge

Use Case

What does the 70% look like, built? CareerForge by Accesa.

Not an AI product. An organisational readiness system that has AI in it — in the right place, on a foundation that earns it.

AI Features & What They Depend On

Learning paths
Personalised node-graph path spanning skills, certifications, projects, milestones. Generated by AI, grounded in current market data.
Depends on: structured skill taxonomy

Role suggestions
5 emerging roles with match scores (incl. bench), based on specialisation + seniority. Surfaces internal mobility opportunities automatically.
Depends on: reliable profile data

Market intelligence
6 tech trends every 6h, role demand catalogue every 2h. Live web search via Perplexity Sonar Pro — no stale embeddings.
Depends on: clean, unfragmented data pipeline

Coaching advice
Coach asks AI about an employee via pseudo-ID. Specific, market-grounded, auditable. No PII ever leaves the system boundary.
Depends on: governance baked into the data model

Skill gap analysis
Top 8 required skills for any target role, with proficiency level + market demand score. Computed against the employee's current skill set.
Depends on: validated, current skill data

Intelligence digest
Weekly AI report: top rising skills, role movements, recommended actions for the coaching team. Cached, refreshable on demand.
Depends on: clean connector data, no silos

Mentor Matcher
Coach portal ranks available mentors against any employee: 55% skill overlap, 30% seniority fit, 15% assignment load. Deterministic — no LLM cost.
Depends on: structured skill + seniority data

Opportunity AI Match
Manager publishes an opportunity — AI ranks all 212 employees by skill overlap and produces a narrative for the top 3 candidates. Every result is logged in an append-only audit trail.
Depends on: reliable profile & skill data
Demo Flows by Persona

1. Employee: Profile → Skill heatmap → Role suggestions (incl. bench) → AI learning path
2. Coach: Employees list → Employee detail (pseudo-ID) → AI coaching question → Mentor Match card → Analytics & digest
3. Manager: Publish opportunity → AI Match → candidate shortlist → Team readiness scores → Pending sign-offs
4. Admin: Dashboard stats → Configure AI Workflow → Suggestion Audit log → 4.0 Readiness score → Users & API keys

None of these eight features works without the 70% underneath them.
That is not a coincidence. That is a design choice.

What would your organisation's readiness score be today?

Every AI call is logged in an append-only audit trail — suggestion type, portal, feature, employee pseudo-ID, model used, sources, generated_at, expires_at. Configurable per-feature via the AI Workflow Connector in the Admin portal.

One active workflow config drives all portals. Switch provider, TTL, or guardrails without a code deploy.

AI Mechanics — How Each Suggestion Works
◉  Employee Portal
Role Suggestions (LLM + RAG)
How it’s computed

Employee’s skills, seniority, and specialisation are serialised into a prompt. The active AI Workflow provider (Claude / Perplexity / OpenAI / Gemini / self-hosted) is called via get_role_suggestions_for_profile().

RAG searches ai_documents for relevant roles-catalogue and trend chunks, prepends them as context, then asks the LLM to return the top N most relevant roles for this profile.

A deterministic match_score (0.0–1.0) is appended per role: |emp_skills ∩ required| / |required|. Roles below min_match_score are filtered before returning.
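The deterministic overlap score can be sketched in a few lines of Python. This is a minimal illustration of the formula above, not the actual backend code; the role dictionaries and the guardrail default are assumptions.

```python
def match_score(emp_skills: set[str], required: set[str]) -> float:
    """Deterministic overlap score: |emp_skills ∩ required| / |required|."""
    if not required:
        return 0.0
    return len(emp_skills & required) / len(required)

def filter_roles(roles: list[dict], emp_skills: set[str],
                 min_match_score: float = 0.3) -> list[dict]:
    """Attach a score to each suggested role and drop those below the guardrail."""
    out = []
    for role in roles:
        score = match_score(emp_skills, set(role["required_skills"]))
        if score >= min_match_score:
            out.append({**role, "match_score": round(score, 2)})
    return out
```

The same score is reused verbatim for opportunity matching, which is why the badge values agree across portals.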

Guardrails (coverage_config)

// employee → role_suggestions
{
  "enabled": true,
  "ttl_hours": 24,
  "model_override": null,
  "guardrails": { "max_roles": 5, "min_match_score": 0.3 }
}

Audit log entry

{
  "suggestion_type": "role_suggestions",
  "portal": "employee",
  "feature": "role_suggestions",
  "employee_pseudo_id": "EMP-XXXXX",
  "workflow_config_id": 1,
  "model_used": "claude-sonnet-4-6",
  "sources_used": ["ai_documents", "roles_catalogue"],
  "generated_at": "2026-04-27T13:52Z",
  "expires_at": "2026-04-28T13:52Z"  // TTL resolved: coverage_config → 24h
}
Learning Path (LLM + RAG)
How it’s computed

Skill set, career goal, and seniority gap are combined with RAG context (skill docs, cert data). The LLM is asked to return a node-graph JSON with node types: skill, cert, project, milestone.

Output is validated for JSON structure. Nodes exceeding max_nodes are trimmed (lowest-priority last). If include_certs: false, cert nodes are stripped. Cached 48h — long enough to avoid redundant calls, short enough to reflect market shifts.
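The post-generation guardrail pass described above can be sketched like this. It is illustrative only and assumes nodes arrive ordered highest-priority first, as the trimming rule implies; the function name is an assumption.

```python
def enforce_path_guardrails(nodes: list[dict], max_nodes: int = 12,
                            include_certs: bool = True) -> list[dict]:
    """Apply learning-path guardrails: strip cert nodes if disabled, then trim."""
    if not include_certs:
        nodes = [n for n in nodes if n["type"] != "cert"]
    return nodes[:max_nodes]  # lowest-priority nodes fall off the end
```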

Guardrails (coverage_config)

// employee → learning_path
{
  "enabled": true,
  "ttl_hours": 48,
  "model_override": null,
  "guardrails": { "max_nodes": 12, "include_certs": true }
}

Audit log entry

{
  "suggestion_type": "learning_path",
  "portal": "employee",
  "feature": "learning_path",
  "employee_pseudo_id": "EMP-XXXXX",
  "model_used": "claude-sonnet-4-6",
  "sources_used": ["ai_documents", "skill_taxonomy"],
  "generated_at": "2026-04-27T13:52Z",
  "expires_at": "2026-04-29T13:52Z"  // TTL: 48h
}
Opportunity Match Score (Deterministic)
How it’s computed

No LLM. For every published opportunity, at query time:

overlap = emp_skills ∩ opp.required_skills
score   = len(overlap) / len(required_skills)  # if required_skills empty → 0.0

Badge colour: green ≥70%, yellow 40–69%, grey <40%. Score attached to every row in the opportunities list — no separate call needed.
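The badge thresholds map directly to code. A sketch of the colour rule stated above (the function name is illustrative):

```python
def badge_colour(score: float) -> str:
    """Map a 0.0–1.0 opportunity match score to its list badge colour."""
    pct = score * 100
    if pct >= 70:
        return "green"
    if pct >= 40:
        return "yellow"
    return "grey"
```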

Guardrails (coverage_config)

// employee → opportunities_matching
{
  "enabled": true,
  "ttl_hours": 6,
  "model_override": null,
  "guardrails": { "min_match_score": 0.2 }
}

Opportunities below min_match_score are still shown but de-emphasised. Disabling the feature hides all badges.

Audit log entry

{
  "suggestion_type": "opportunity_match",
  "portal": "employee",
  "feature": "opportunities_matching",
  "employee_pseudo_id": "EMP-XXXXX",
  "model_used": "none (deterministic)",
  "sources_used": ["employee_skills", "opportunity_requirements"],
  "generated_at": "2026-04-27T13:52Z",
  "expires_at": "2026-04-27T19:52Z"  // TTL: 6h
}
AI Chat / Career Advice (LLM + RAG)
How it’s computed

Free-text question + anonymised profile (pseudo-id, skills, seniority — no PII) sent to the active LLM provider. RAG retrieves the 5 most relevant document chunks from ai_documents and prepends them as grounding context.

Short 1h TTL — conversational responses should feel fresh. safe_mode: true injects a system instruction preventing PII inference, speculation on colleagues, or absolute hiring/firing recommendations.

Guardrails (coverage_config)

// employee → ai_chat
{
  "enabled": true,
  "ttl_hours": 1,
  "model_override": null,
  "guardrails": { "max_tokens": 1024, "safe_mode": true }
}

Audit log entry

{
  "suggestion_type": "ai_chat",
  "portal": "employee",
  "feature": "ai_chat",
  "employee_pseudo_id": "EMP-XXXXX",
  "model_used": "claude-sonnet-4-6",
  "sources_used": ["ai_documents", "employee_profile"],
  "generated_at": "2026-04-27T13:52Z",
  "expires_at": "2026-04-27T14:52Z"  // TTL: 1h — conversational
}
▦  Coach Portal
Coaching Advice (LLM + RAG)
How it’s computed

Coach submits a question referencing an employee by pseudo-id. Backend loads the anonymised profile (no PII decryption) and passes it + market-intelligence RAG chunks to the active LLM provider.

require_pseudo_id: true means the endpoint rejects any request without a valid, existing pseudo-id — no anonymous bulk queries are possible. Every response is market-grounded and cites sources.

Guardrails (coverage_config)

// coach → coaching_advice
{
  "enabled": true,
  "ttl_hours": 4,
  "model_override": null,
  "guardrails": { "require_pseudo_id": true, "safe_mode": true }
}

Audit log entry

{
  "suggestion_type": "coaching_advice",
  "portal": "coach",
  "feature": "coaching_advice",
  "employee_pseudo_id": "EMP-XXXXX",
  "model_used": "claude-sonnet-4-6",
  "sources_used": ["ai_documents", "market_intelligence"],
  "generated_at": "2026-04-27T13:52Z",
  "expires_at": "2026-04-27T17:52Z"  // TTL: 4h — pseudo_id required, PII never leaves boundary
}
Market Intelligence (LLM + RAG)
How it’s computed

Connectors (tech news feeds, job market APIs) ingest data hourly via Celery Beat into ai_documents. The RAG layer retrieves the freshest chunks. The LLM synthesises them into trend objects: { topic, trend_direction, summary, confidence, sources[] }.

Not employee-specific — one shared cache entry per topic. Individual source URLs are cited so coaches can verify any claim.

Guardrails (coverage_config)

// coach → market_intelligence
{
  "enabled": true,
  "ttl_hours": 6,
  "model_override": null,
  "guardrails": { "max_trends": 6 }
}

Audit log entry

{
  "suggestion_type": "market_intelligence",
  "portal": "coach",
  "feature": "market_intelligence",
  "employee_pseudo_id": null,  // not employee-scoped
  "model_used": "claude-sonnet-4-6",
  "sources_used": ["connector:tech-news", "connector:job-market"],
  "generated_at": "2026-04-27T13:52Z",
  "expires_at": "2026-04-27T19:52Z"  // TTL: 6h
}
Skill Gap Analysis (LLM + RAG)
How it’s computed

Target role title + employee’s current skill set sent to the LLM. RAG retrieves role requirement docs and recent job-posting data. The model returns the top 8 skills the employee is missing or weak in, with required_level and market_demand score grounded in connector data.

Guardrails (coverage_config)

// coach → skill_gap
{
  "enabled": true,
  "ttl_hours": 12,
  "model_override": null,
  "guardrails": { "max_skills": 8 }
}

Output truncated to max_skills, sorted by market_demand descending. Only absent or below-required-level skills are included.

Audit log entry

{
  "suggestion_type": "skill_gap",
  "portal": "coach",
  "feature": "skill_gap",
  "employee_pseudo_id": "EMP-XXXXX",
  "model_used": "claude-sonnet-4-6",
  "sources_used": ["ai_documents", "roles_catalogue", "job_postings"],
  "generated_at": "2026-04-27T13:52Z",
  "expires_at": "2026-04-28T01:52Z"  // TTL: 12h
}
Mentor Matcher (Deterministic)
How it’s computed

No LLM. Pool: employees with available_for_mentoring=true, seniority ≥ employee seniority, pseudo_id ≠ employee. Weighted score per candidate:

skill_score     = |mentor_skills ∩ emp_skills| / |emp_skills|
sen_gap         = seniority_rank(mentor) - seniority_rank(emp)
seniority_score = max(0, 1 - max(0, sen_gap - 2) * 0.15)
load_score      = 1 - (active_assignments / 10)
match_score     = skill_score * 0.55 + seniority_score * 0.30 + load_score * 0.15
Guardrails (coverage_config)

// coach → mentor_matching
{
  "enabled": true,
  "ttl_hours": 24,
  "model_override": null,
  "guardrails": { "min_match_score": 0.2, "max_mentors": 5 }
}

Candidates below min_match_score excluded. Capped at max_mentors. Zero LLM cost — entirely data-model driven. Only pseudo-ids exposed — no names.
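Under the stated weights, the scorer reduces to a few lines of Python. This is a runnable sketch of the formula above, not the production implementation; how seniority ranks are numbered is an assumption.

```python
def mentor_match_score(mentor_skills: set[str], emp_skills: set[str],
                       mentor_rank: int, emp_rank: int,
                       active_assignments: int) -> float:
    """Weighted mentor score: 55% skill overlap, 30% seniority fit, 15% load."""
    skill_score = len(mentor_skills & emp_skills) / len(emp_skills) if emp_skills else 0.0
    sen_gap = mentor_rank - emp_rank
    seniority_score = max(0.0, 1 - max(0, sen_gap - 2) * 0.15)  # penalise gaps beyond 2 levels
    load_score = 1 - active_assignments / 10                    # lighter load scores higher
    return skill_score * 0.55 + seniority_score * 0.30 + load_score * 0.15
```

A mentor one level up with full skill overlap and no active assignments scores 1.0; each seniority level beyond a gap of two shaves 15% off the seniority component.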

Audit log entry

{
  "suggestion_type": "mentor_matching",
  "portal": "coach",
  "feature": "mentor_matching",
  "employee_pseudo_id": "EMP-XXXXX",
  "model_used": "none (deterministic)",
  "sources_used": ["employee_skills", "mentor_assignments", "seniority_index"],
  "generated_at": "2026-04-27T13:52Z",
  "expires_at": "2026-04-28T13:52Z"  // TTL: 24h — deterministic, cached to avoid repeated DB scans
}
⊞  Manager Portal
Opportunity AI Candidate Match (LLM + Deterministic)
How it’s computed

Phase 1 — Deterministic (all 212 employees):

for emp in all_employees:
    overlap = emp.skills ∩ opp.required_skills
    score   = len(overlap) / len(required)
    candidates.append({ emp, score, overlap, missing })

Sorted by score desc. Filtered by min_match_score. Capped at max_candidates.

Phase 2 — LLM narrative (top 3 only): opportunity description + top-3 anonymised profiles sent to the active provider for a concise hiring narrative per candidate.
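Phase 1 in isolation is plain set arithmetic. A sketch of the deterministic pass, with employee dictionaries and guardrail defaults as illustrative assumptions:

```python
def rank_candidates(employees: list[dict], required_skills: set[str],
                    min_match_score: float = 0.3, max_candidates: int = 10) -> list[dict]:
    """Phase 1: score every employee by skill overlap, then filter, sort, cap."""
    candidates = []
    for emp in employees:
        skills = set(emp["skills"])
        overlap = skills & required_skills
        score = len(overlap) / len(required_skills) if required_skills else 0.0
        if score >= min_match_score:
            candidates.append({"pseudo_id": emp["pseudo_id"], "score": score,
                               "overlap": overlap,
                               "missing": required_skills - skills})
    candidates.sort(key=lambda c: c["score"], reverse=True)
    return candidates[:max_candidates]
```

Only the capped shortlist (top 3 in the demo) ever reaches the LLM, which keeps per-opportunity cost flat regardless of headcount.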

Guardrails (coverage_config)

// manager → opportunities_ai_match
{
  "enabled": true,
  "ttl_hours": 2,
  "model_override": null,
  "guardrails": { "max_candidates": 10, "min_match_score": 0.3 }
}

Short 2h TTL — opportunity requirements may change. LLM only sees: pseudo_id, specialisation, seniority, skill_list. No names, no email, no personal data.

Audit log entry

{
  "suggestion_type": "opportunities_ai_match",
  "portal": "manager",
  "feature": "opportunities_ai_match",
  "employee_pseudo_id": null,  // opportunity-scoped, not per-employee
  "payload_json": { "opportunity_id": 42, "total_evaluated": 212, "candidates_shown": 10 },
  "model_used": "claude-sonnet-4-6",
  "generated_at": "2026-04-27T13:52Z",
  "expires_at": "2026-04-27T15:52Z"  // TTL: 2h
}
Team Readiness Score (ML Model)
How it’s computed

Not an LLM — an XGBoost model trained on historical skill progression, certification completions, IDP activity, and engagement data. Produces a score 0–100 per employee.

Features: skill_count, cert_count, idp_completion_rate, seniority_index, last_activity_days, engagement_score. Model version tracked in MLflow; promoted to production by an admin trigger — no code deploy required.

Guardrails (coverage_config)

// manager → team_readiness
{
  "enabled": true,
  "ttl_hours": 24,
  "model_override": null,
  "guardrails": { "show_names": false }
}

show_names: false enforces pseudo-id display at all times. PII reveal requires an explicit employee consent flow. 24h TTL matches the daily reporting cadence.

Audit log entry

{
  "suggestion_type": "team_readiness",
  "portal": "manager",
  "feature": "team_readiness",
  "employee_pseudo_id": "EMP-XXXXX",
  "model_used": "xgboost-readiness-v3 (MLflow)",
  "sources_used": ["employee_features", "skill_records", "idp_history"],
  "generated_at": "2026-04-27T13:52Z",
  "expires_at": "2026-04-28T13:52Z"  // TTL: 24h — ML model, no API cost
}
⊕  AI Workflow Connector — Audit & Config
Audit Trail — every suggestion, every portal (Append-only)
How it works

Every call to any AI function invokes _log_suggestion(portal=, feature=) before returning. The TTL is resolved by _get_feature_ttl(workflow, portal, feature): reads coverage_config[portal][feature].ttl_hours from the active config, falls back to suggestion_ttl_hours global, then to 24h hardcoded default.
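The fallback chain reads naturally as code. This is a minimal reconstruction of the resolution order described above, not the actual `_get_feature_ttl` source; the dict-shaped workflow argument is an assumption.

```python
DEFAULT_TTL_HOURS = 24  # hardcoded last-resort default

def resolve_feature_ttl(workflow: dict, portal: str, feature: str) -> int:
    """TTL resolution: coverage_config[portal][feature] → global suggestion_ttl_hours → 24h."""
    feature_cfg = (workflow.get("coverage_config") or {}).get(portal, {}).get(feature, {})
    ttl = feature_cfg.get("ttl_hours")
    if ttl is None:
        ttl = workflow.get("suggestion_ttl_hours")
    return ttl if ttl is not None else DEFAULT_TTL_HOURS
```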

The log is append-only — no UPDATE or DELETE is ever issued. Admins browse it via Admin → Connectors → Suggestion Audit tab, filterable by portal, feature, pseudo-id, or date range.

Active config resolution (5-min cache)

# rag.py — called on every AI function
async def _get_active_workflow():
    # in-process 5-min cache
    if _cache and (now - _ts) < 300:
        return _cache
    cfg = await db.scalar(
        select(AIWorkflowConfig)
        .where(AIWorkflowConfig.is_active == True)
    )
    _cache = cfg  # update cache
    return cfg

Switching provider in Admin UI takes effect within 5 minutes across all services — no restart required.

Full log table schema

-- ai_suggestion_logs (PostgreSQL)
id                  SERIAL PRIMARY KEY
suggestion_type     VARCHAR(64)
portal              VARCHAR(32)
feature             VARCHAR(64)
employee_pseudo_id  VARCHAR(32)    -- null if not emp-scoped
workflow_config_id  INT FK
payload_json        JSONB
model_used          VARCHAR(128)
sources_used        TEXT[]
generated_at        TIMESTAMPTZ DEFAULT now()
expires_at          TIMESTAMPTZ    -- expires_at = generated_at + ttl_hours interval
Per-feature Config — live guardrail changes, no deploy (Admin UI)
What you can control

In Admin → Connectors → AI Workflow, each config has expandable per-feature panels. Per feature you can change:

  • Enabled — toggle off without a code change
  • TTL hours — how long results are cached before re-generation
  • Model override — cheaper model for bulk scoring, smarter model for coaching advice
  • Guardrail key/values — any key shown above; live after the 5-min cache window
Self-hosted AI support

Set provider to self_hosted. Enter a base_url (e.g. http://ollama:11434). The connector auto-appends /v1 and calls /v1/chat/completions (OpenAI-compatible). Bearer auth optional. 120s timeout.

{
  "provider": "self_hosted",
  "model_name": "llama3.1:8b",
  "base_url": "http://ollama:11434",
  "api_key": ""  // optional bearer
}

Works with Ollama, LM Studio, vLLM, or any OpenAI-API-compatible server.
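The URL construction described above amounts to a couple of lines. A sketch only; the function name is illustrative and this is not the connector's actual code.

```python
def chat_completions_url(base_url: str) -> str:
    """Append /v1 (if absent) and the OpenAI-compatible chat completions route."""
    base = base_url.rstrip("/")
    if not base.endswith("/v1"):
        base += "/v1"
    return base + "/chat/completions"
```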

ai_workflow_configs schema

-- ai_workflow_configs (PostgreSQL)
id                    SERIAL PK
name                  VARCHAR(128)
provider              VARCHAR(32)   -- anthropic/openai/perplexity/gemini/self_hosted
model_name            VARCHAR(128)
api_key_encrypted     TEXT          -- AES-256-GCM
base_url              VARCHAR(512)  -- self-hosted only
is_active             BOOLEAN
suggestion_ttl_hours  INT           -- global fallback TTL
coverage_config       JSONB         -- per-feature overrides
sources_config        JSONB
notes                 TEXT
Step-by-Step Walkthrough

Employee Demo — Maria Demo

  1. Open Employee Portal & Sign In
     Navigate to demo.lupulup.com and click Sign in with SSO. Use the credentials from the tab below. You'll be redirected through Keycloak back to the portal.
  2. Dashboard — the 70% in action
     The Dashboard loads the employee's readiness score, AI role suggestions, and a learning path preview — all computed from structured data. [Show the role suggestions card]
  3. Learning Path — AI-generated, not generic
     Navigate to Learning Path in the sidebar. The path is tailored to Maria's current skills and career goals. Hit Generate New Path to trigger a live AI call. Note the node types: skill, cert, project, milestone.
  4. Opportunities & Applications
     Navigate to Opportunities. 25 live opportunities across project, bench, L&D, R&D types. Apply to one — watch the cover note flow through to the Coach portal's applications view.
  5. Dashboard AI Advice — ask a real question
     Back on the Dashboard, type a specific question in the AI Advice box (e.g. "What certifications should I pursue to move into a tech lead role?") and click Get Advice. Powered by Perplexity Sonar Pro with live web search. [Best demo moment]
  6. My Profile — PII governance demo
     Navigate to My Profile. The profile stores only anonymised data (pseudo-ID: EMP-WUZNZ). Full name and email are AES-256-GCM encrypted at rest — only the employee and authorised coaches can see them.
Email: maria.demo@accesa.eu · Password: Demo2026!

Coach Demo — Coach Portal

  1. Open Coach Portal & Sign In
     Navigate to demo-coach.lupulup.com. Use a coach account (created in the coach Keycloak realm). The Coach Dashboard shows employees, skill heatmap, bench health, and intelligence digest.
  2. Employees — browse 212 anonymised profiles
     Navigate to Employees. All 212 employees are listed by pseudo-ID. Filter by specialisation, seniority, or skill. No PII is visible — this is the privacy-by-design architecture. [Highlight: pseudo-ID, not name]
  3. Employee Detail — AI Coaching in context
     Click any employee. In the AI Coaching Advice card, ask a specific question: "What skill gaps does this engineer have for a senior backend role?". The AI has access to their anonymised profile and market data. [Best demo moment]
  4. Opportunities — manage bench & L&D
     Navigate to Opportunities. Create a new opportunity or review applications. The Suggestions button on any opportunity runs AI matching against all 212 employees by skill overlap + seniority.
  5. Analytics — Intelligence Digest
     Navigate to Analytics. The Intelligence Digest is a weekly AI-generated report on skill trends, role market movements, and recommended L&D actions. Hit Refresh to regenerate live via Perplexity.
  6. Analytics — Skill Heatmap
     Still in Analytics, the Skill Heatmap shows the most common skills across all employees. Filter by specialisation to see which skills are concentrated in which teams — useful for capacity planning.
Realm: coach (auth-demo.lupulup.com) · Password: Demo2026!

Manager Demo — Alex Manager

  1. Open Manager Portal & Sign In
     Navigate to demo-manager.lupulup.com. Use Alex's credentials below. The system verifies the manager realm role in the JWT — no manager role, no access.
  2. Dashboard — team KPIs at a glance
     The Dashboard shows: team size (10 direct reports), pending sign-offs, and average team Industry 4.0 readiness score. Computed from real skill data — not a mock number. [Key differentiator]
  3. My Team — per-member readiness
     Navigate to My Team. Each of Alex's 10 direct reports is listed with their role, seniority, skills, and individual Industry 4.0 readiness score. Click any member to drill into their forms and dev plan history.
  4. Pending Sign-offs — the governance moment
     Navigate to Pending Sign-offs. Development plan forms sent by coaches that need manager approval appear here. Opening a form shows the commitments, timeline, and the sign-off chain: employee committed → coach signed → manager signs. [Show the 3-party sign-off chain]
  5. Sign off a dev plan
     From the form detail page, click Sign Off. The system records the timestamp and marks the form as fully active. The employee will see their dev plan become live in their portal.
Email: alex.manager@accesa.eu · Password: Demo2026!

Admin Demo — admin.demo

  1. Open Admin Portal & Sign In
     Navigate to demo-admin.lupulup.com. Log in with admin.demo credentials. The system checks the admin realm role in the JWT — no admin role, no access.
  2. Dashboard — live platform stats
     The Dashboard shows real-time platform health: total registered employees, active coaches, enabled/total data connectors, indexed documents, and chunks ingested in the last 24h. Everything is computed from live data — not mocked. [Key differentiator]
  3. Connectors → Data Connectors tab
     Navigate to Connectors. The Data Connectors tab lists every data source feeding the RAG pipeline — source type, vector namespace, fetch interval, enabled/disabled status. You can add from the catalog or define a custom JSON/Python connector on the spot.
  4. Connectors → AI Workflow tab — the control room
     Switch to the AI Workflow tab. This is where the active AI provider is configured — provider (Anthropic / OpenAI / Perplexity / Gemini / self-hosted), model name, API key, and per-portal feature coverage (coverage_config). Only one config can be active at a time; activating a new one deactivates all others instantly — no restart, and it takes effect within 5 minutes across all services. [Show the per-feature guardrails panel: TTL, min match score, max results. Show the self-hosted option (Ollama/LM Studio)]
  5. Connectors → Suggestion Audit tab — append-only trail
     Switch to the Suggestion Audit tab. Every AI suggestion ever made is logged here: suggestion type, portal, feature, employee pseudo-ID, model used, sources used, generated_at, expires_at. Filter by portal, feature, or pseudo-ID. No UPDATE or DELETE is ever issued — the log is fully append-only. [This is the governance story — show it]
  6. 4.0 Readiness — org-wide score
     Navigate to 4.0 Readiness. The company-wide Industry 4.0 readiness score (0–100) is shown with a level label: Leader / Advanced / Developing / Starting. The score aggregates across all registered employees — skill count, IDP completions, seniority distribution, engagement. [A number leadership can act on]
  7. Users — cross-realm management
     Navigate to Users. Switch between realms (employee / coach / admin). You can view all users, their roles, and linked portal access. API Keys is available in the sidebar for managing scoped programmatic access tokens — useful for integration demos.
Email: admin.demo@accesa.eu · Password: Demo2026!