What does the 70% look like, built? CareerForge by Accesa.
Not an AI product. An organisational readiness system that has AI in it — in the right place, on a foundation that earns it.
Personalised node-graph path spanning skills, certifications, projects, milestones. Generated by AI, grounded in current market data.
5 emerging roles with match scores incl. bench, based on specialisation + seniority. Surfaces internal mobility opportunities automatically.
6 tech trends every 6h, role demand catalogue every 2h. Live web search via Perplexity Sonar Pro — no stale embeddings.
Coach asks AI about employee via pseudo-ID. Specific, market-grounded, auditable. No PII ever leaves the system boundary.
Top 8 required skills for any target role, with proficiency level + market demand score. Computed against the employee's current skill set.
Weekly AI report: top rising skills, role movements, recommended actions for the coaching team. Cached, refreshable on demand.
Coach portal ranks available mentors against any employee: 55% skill overlap, 30% seniority fit, 15% assignment load. Deterministic — no LLM cost.
Manager publishes an opportunity — AI ranks all 212 employees by skill overlap and produces a narrative for the top 3 candidates. Every result is logged in an append-only audit trail.
None of these features work without the 70% underneath them.
That is not a coincidence. That is a design choice.
What would your organisation's readiness score be today?
Every AI call is logged in an append-only audit trail — suggestion type, portal, feature, employee pseudo-ID, model used, sources, generated_at, expires_at. Configurable per-feature via the AI Workflow Connector in the Admin portal.
One active workflow config drives all portals. Switch provider, TTL, or guardrails without a code deploy.
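For illustration, the active config might look roughly like this (only provider, suggestion_ttl_hours, and coverage_config are named in this document; every other field and value below is an assumption):

    # Hypothetical shape of the single active AI Workflow config.
    active_workflow = {
        "provider": "claude",        # claude | perplexity | openai | gemini | self_hosted
        "suggestion_ttl_hours": 24,  # global TTL fallback
        "coverage_config": {
            "employee": {
                "career_path": {"ttl_hours": 48, "max_nodes": 20, "include_certs": True},
                "role_suggestions": {"min_match_score": 0.4},
            },
            "coach": {
                "ask_ai": {"ttl_hours": 1, "safe_mode": True, "require_pseudo_id": True},
            },
        },
    }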
Employee’s skills, seniority, and specialisation are serialised into a prompt. The active AI Workflow provider (Claude / Perplexity / OpenAI / Gemini / self-hosted) is called via get_role_suggestions_for_profile().
RAG searches ai_documents for relevant roles-catalogue and trend chunks, prepends them as context, then asks the LLM to return the top N most relevant roles for this profile.
A deterministic match_score (0.0–1.0) is appended per role: |emp_skills ∩ required| / |required|. Roles below min_match_score are filtered before returning.
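A minimal sketch of that score, assuming skills are compared as normalised strings (names are illustrative):

    def match_score(emp_skills: set[str], required: set[str]) -> float:
        # |emp_skills ∩ required| / |required|, per the formula above.
        if not required:
            return 0.0
        return len(emp_skills & required) / len(required)

    # Applied before returning: roles below min_match_score are dropped.
    # roles = [r for r in roles if r["match_score"] >= min_match_score]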
Skill set, career goal, and seniority gap are combined with RAG context (skill docs, cert data). The LLM is asked to return a node-graph JSON with node types: skill, cert, project, milestone.
Output is validated for JSON structure. Nodes beyond max_nodes are trimmed, dropping the lowest-priority nodes first. If include_certs: false, cert nodes are stripped. Cached 48h — long enough to avoid redundant calls, short enough to reflect market shifts.
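A sketch of that validation step under the stated rules (the node shape and the priority field are assumptions; the node types, max_nodes, and include_certs come from the text):

    VALID_TYPES = {"skill", "cert", "project", "milestone"}

    def validate_graph(nodes: list[dict], max_nodes: int, include_certs: bool) -> list[dict]:
        nodes = [n for n in nodes if n.get("type") in VALID_TYPES]
        if not include_certs:
            nodes = [n for n in nodes if n["type"] != "cert"]
        # Trim overflow, dropping the lowest-priority nodes first
        # (assumes each node carries a numeric "priority").
        nodes.sort(key=lambda n: n.get("priority", 0), reverse=True)
        return nodes[:max_nodes]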
No LLM. For every published opportunity, a deterministic match score is computed at query time.
Badge colour: green ≥70%, yellow 40–69%, grey <40%. Score attached to every row in the opportunities list — no separate call needed.
Opportunities below min_match_score are still shown but de-emphasised. Disabling the feature hides all badges.
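The badge mapping is simple enough to state as code (thresholds from above; the function name is illustrative):

    def badge_colour(score: float) -> str:
        # green ≥ 70%, yellow 40–69%, grey < 40%
        if score >= 0.70:
            return "green"
        if score >= 0.40:
            return "yellow"
        return "grey"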
Free-text question + anonymised profile (pseudo-id, skills, seniority — no PII) sent to the active LLM provider. RAG retrieves the 5 most relevant document chunks from ai_documents and prepends them as grounding context.
Short 1h TTL — conversational responses should feel fresh. safe_mode: true injects a system instruction preventing PII inference, speculation on colleagues, or absolute hiring/firing recommendations.
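A sketch of how the request might be assembled, assuming safe_mode maps to an extra system instruction (the exact wording and helper names are assumptions; the document only states what safe_mode prevents):

    def build_messages(question: str, profile: dict,
                       rag_chunks: list[str], safe_mode: bool) -> list[dict]:
        system = "Ground every answer in the provided context."
        if safe_mode:
            system += (" Do not infer PII, do not speculate about colleagues,"
                       " and never give absolute hiring or firing recommendations.")
        context = "\n\n".join(rag_chunks)  # the 5 most relevant ai_documents chunks
        user = f"Profile (anonymised): {profile}\n\nContext:\n{context}\n\nQuestion: {question}"
        return [{"role": "system", "content": system},
                {"role": "user", "content": user}]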
Coach submits a question referencing an employee by pseudo-id. Backend loads the anonymised profile (no PII decryption) and passes it + market-intelligence RAG chunks to the active LLM provider.
require_pseudo_id: true means the endpoint rejects any request without a valid, existing pseudo-id — no anonymous bulk queries are possible. Every response is market-grounded and cites sources.
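A sketch of that guard (the loader and error handling are placeholders):

    def load_anonymised_profile(pseudo_id: str) -> dict | None:
        # Placeholder: reads skills, seniority, specialisation by pseudo-id,
        # with no PII decryption.
        ...

    def resolve_profile(pseudo_id: str | None) -> dict:
        # require_pseudo_id: reject any request without a valid, existing pseudo-id.
        if not pseudo_id:
            raise ValueError("pseudo_id is required")
        profile = load_anonymised_profile(pseudo_id)
        if profile is None:
            raise ValueError(f"unknown pseudo_id: {pseudo_id}")
        return profile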
Connectors (tech news feeds, job market APIs) ingest data hourly via Celery Beat into ai_documents. The RAG layer retrieves the freshest chunks. The LLM synthesises them into trend objects: { topic, trend_direction, summary, confidence, sources[] }.
Not employee-specific — one shared cache entry per topic. Individual source URLs are cited so coaches can verify any claim.
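For illustration, the trend object as a dataclass, plus a Celery Beat entry for the hourly ingest (task path, app name, and direction values are assumptions; the field names come from the object shape above):

    from celery import Celery
    from celery.schedules import crontab
    from dataclasses import dataclass, field

    @dataclass
    class Trend:
        topic: str
        trend_direction: str            # e.g. "rising" / "declining" (assumed values)
        summary: str
        confidence: float
        sources: list[str] = field(default_factory=list)

    app = Celery("careerforge")         # hypothetical app name

    app.conf.beat_schedule = {
        "ingest-connectors": {
            "task": "connectors.tasks.ingest_all",  # assumed task path
            "schedule": crontab(minute=0),          # top of every hour
        },
    }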
Target role title + employee’s current skill set sent to the LLM. RAG retrieves role requirement docs and recent job-posting data. The model returns the top 8 skills the employee is missing or weak in, with required_level and market_demand score grounded in connector data.
Output truncated to max_skills, sorted by market_demand descending. Only absent or below-required-level skills are included.
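A sketch of that post-processing (required_level and market_demand are named above; the remaining field names are assumptions):

    def finalize_skill_gaps(suggested: list[dict], emp_levels: dict[str, int],
                            max_skills: int) -> list[dict]:
        # Keep only skills the employee lacks or holds below required_level.
        gaps = [s for s in suggested
                if emp_levels.get(s["skill"], 0) < s["required_level"]]
        # Sort by market_demand descending, then truncate to max_skills.
        gaps.sort(key=lambda s: s["market_demand"], reverse=True)
        return gaps[:max_skills]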
No LLM. Pool: employees with available_for_mentoring=true, seniority ≥ employee seniority, pseudo_id ≠ employee. Weighted score per candidate: 55% skill overlap, 30% seniority fit, 15% assignment load (see the sketch below).
Candidates below min_match_score excluded. Capped at max_mentors. Zero LLM cost — entirely data-model driven. Only pseudo-ids exposed — no names.
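The weighted score, sketched with the weights from the coach portal (all inputs assumed normalised to 0–1; treating a lighter assignment load as better is an assumption):

    def mentor_score(skill_overlap: float, seniority_fit: float,
                     assignment_load: float) -> float:
        # 55% skill overlap, 30% seniority fit, 15% assignment load.
        return (0.55 * skill_overlap
                + 0.30 * seniority_fit
                + 0.15 * (1.0 - assignment_load))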
Phase 1 — Deterministic (all 212 employees): a skill-overlap score is computed per employee.
Sorted by score desc. Filtered by min_match_score. Capped at max_candidates.
Phase 2 — LLM narrative (top 3 only): opportunity description + top-3 anonymised profiles sent to the active provider for a concise hiring narrative per candidate.
Short 2h TTL — opportunity requirements may change. LLM only sees: pseudo_id, specialisation, seniority, skill_list. No names, no email, no personal data.
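Both phases, sketched end to end (reusing the match_score helper from earlier; field names are illustrative, and only the four fields listed above are placed in the LLM payload):

    def shortlist(opportunity: dict, employees: list[dict],
                  min_match_score: float, max_candidates: int):
        # Phase 1: deterministic skill-overlap score over all employees.
        scored = sorted(
            ((match_score(set(e["skills"]), set(opportunity["required_skills"])), e)
             for e in employees),
            key=lambda pair: pair[0], reverse=True)
        top = [(s, e) for s, e in scored if s >= min_match_score][:max_candidates]

        # Phase 2: only these anonymised fields reach the LLM for the top-3 narrative.
        llm_payload = [{"pseudo_id": e["pseudo_id"],
                        "specialisation": e["specialisation"],
                        "seniority": e["seniority"],
                        "skill_list": sorted(e["skills"])}
                       for _, e in top[:3]]
        return top, llm_payload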
Not an LLM — an XGBoost model trained on historical skill progression, certification completions, IDP activity, and engagement data. Produces a score 0–100 per employee.
Features: skill_count, cert_count, idp_completion_rate, seniority_index, last_activity_days, engagement_score. Model version tracked in MLflow; promoted to production by an admin trigger — no code deploy required.
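A sketch of scoring with the listed features (the registry name and stage are placeholders; the document only says versions are tracked in MLflow and promoted by an admin trigger):

    import mlflow.pyfunc
    import pandas as pd

    FEATURES = ["skill_count", "cert_count", "idp_completion_rate",
                "seniority_index", "last_activity_days", "engagement_score"]

    model = mlflow.pyfunc.load_model("models:/readiness/Production")  # hypothetical URI

    def readiness_score(employee_features: dict) -> float:
        row = pd.DataFrame([employee_features], columns=FEATURES)
        return float(model.predict(row)[0])  # 0–100 per employee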
show_names: false enforces pseudo-id display at all times. PII reveal requires an explicit employee consent flow. 24h TTL matches the daily reporting cadence.
Every call to any AI function invokes _log_suggestion(portal=, feature=) before returning. The TTL is resolved by _get_feature_ttl(workflow, portal, feature): reads coverage_config[portal][feature].ttl_hours from the active config, falls back to suggestion_ttl_hours global, then to 24h hardcoded default.
The log is append-only — no UPDATE or DELETE is ever issued. Admins browse it via Admin → Connectors → Suggestion Audit tab, filterable by portal, feature, pseudo-id, or date range.
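The fallback chain, sketched (assuming the active config is handled as a plain dict; the storage layer may differ):

    def _get_feature_ttl(workflow: dict, portal: str, feature: str) -> int:
        # 1) per-feature override in the active config
        ttl = (workflow.get("coverage_config", {})
                       .get(portal, {})
                       .get(feature, {})
                       .get("ttl_hours"))
        if ttl is not None:
            return ttl
        # 2) global default on the workflow, else 3) hardcoded 24h
        return workflow.get("suggestion_ttl_hours") or 24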
Switching provider in Admin UI takes effect within 5 minutes across all services — no restart required.
In Admin → Connectors → AI Workflow, each config has expandable per-feature panels. Per feature you can change: provider, TTL, min match score, max results, and feature-specific toggles such as safe_mode, include_certs, require_pseudo_id, and show_names.
Set provider to self_hosted. Enter a base_url (e.g. http://ollama:11434). The connector auto-appends /v1 and calls /v1/chat/completions (OpenAI-compatible). Bearer auth optional. 120s timeout.
Works with Ollama, LM Studio, vLLM, or any OpenAI-API-compatible server.
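A minimal sketch of the call (the default model name is an assumption; the URL handling, optional Bearer auth, and 120s timeout follow the connector description above):

    import requests

    def call_self_hosted(base_url: str, messages: list[dict],
                         model: str = "llama3", token: str | None = None) -> str:
        headers = {"Content-Type": "application/json"}
        if token:  # Bearer auth is optional
            headers["Authorization"] = f"Bearer {token}"
        resp = requests.post(
            f"{base_url.rstrip('/')}/v1/chat/completions",  # /v1 auto-appended
            json={"model": model, "messages": messages},
            headers=headers,
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]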
Employee portal: open demo.lupulup.com and click Sign in with SSO. Use credentials from the tab below. You'll be redirected through Keycloak back to the portal. Each employee appears under a pseudo-ID (e.g. EMP-WUZNZ). Full name and email are AES-256-GCM encrypted at rest — only the employee and authorised coaches can see them.
Coach portal: open demo-coach.lupulup.com. Use a coach account (created in the coach Keycloak realm). The Coach Dashboard shows employees, skill heatmap, bench health, and intelligence digest.
Manager portal: open demo-manager.lupulup.com. Use Alex's credentials below. The system verifies the manager realm role in the JWT — no manager role, no access.
Admin portal: open demo-admin.lupulup.com. Log in with admin.demo credentials. The system checks the admin realm role in the JWT — no admin role, no access.
Each workflow config carries a per-feature coverage map (coverage_config). Only one config can be active at a time. Activating a new one deactivates all others instantly — no restart, takes effect within 5 minutes across all services. Show the per-feature guardrails panel: TTL, min match score, max results. Show the self-hosted option (Ollama/LM Studio).