
Key takeaways
• An adaptive learning platform is three things at once: a learner model (what the learner knows), a content model (items tagged to skills), and a policy that picks the next best item. Build all three or you have a quiz engine.
• Pick algorithms by stage, not by hype. Rule-based for the MVP, BKT or IRT once you have 500 learners, DKT and contextual bandits when you cross 50,000 sequences and need temporal modelling.
• The content bottleneck kills more projects than algorithm choice does. You need at least 10 well-tagged items per skill node before adaptation feels intelligent. Plan content authoring on day one, not month nine.
• 2026 cost ranges. A serious MVP runs $90K–$160K in 3–4 months. A multi-subject platform with BKT and analytics is $260K–$420K in 5–7 months. Enterprise with DKT, knowledge graphs, and LMS integrations is $650K–$1.1M to first launch.
• Fora Soft has shipped this exact stack. BrainCert (WebRTC virtual classroom LMS, $3M rev), Scholarly (15K+ users), Tabsera (multilingual virtual school for Somaliland) — with our agentic engineering pipeline cutting 20–30% off market timelines.
Why Fora Soft wrote this playbook
We have been building e-learning software since 2005, and adaptive layers on top of it for the last decade. Our portfolio includes BrainCert — a WebRTC virtual-classroom LMS that crossed $3M in revenue on a stack we co-built — and Scholarly, a learning platform with 15,000+ users. Tabsera, our multilingual virtual school for an entrepreneur in Somaliland, runs classes in English, French, Arabic, and Turkish, is backed by Telesom (the country’s largest mobile operator), and has been featured on national TV channel Eryal TV.
This guide is the buyer’s and builder’s view we wish every edtech founder, L&D director, school IT lead, and LMS-vendor CTO had on screen before scoping a custom adaptive system. Read it as a practical brief: the algorithms that matter, the architecture that scales, the compliance you cannot skip, the cost ranges that defend in a board meeting, and the pitfalls we have watched kill otherwise good projects.
For broader context on AI-powered e-learning, see our companion piece How to build powerful AI-powered multimedia solutions for e-learning and the integration-focused Integrating AI into e-learning software development.
Scoping an adaptive learning platform?
Send us your subject domain, learner profile, and target scale. We will return a one-week sketch covering algorithms, architecture, compliance, and a realistic budget — no pitch attached.
What an adaptive learning platform actually is
An adaptive learning platform is software that changes what a learner sees next based on what the system has inferred they know. It is not the same as personalised learning — personalised lets the learner pick the path; adaptive picks for them, in real time, from telemetry. The two are complementary: the strongest products do both.
Under the hood, every adaptive learning platform is three coupled systems:
- The learner model. A running estimate of mastery for each skill, expressed as a probability or score. Updated after every answer.
- The content model. A library of items (questions, lessons, simulations) tagged to skills, with calibrated difficulty and prerequisites.
- The policy. The decision rule that, given the current learner state, picks the next item. Rule-based, optimisation, or reinforcement learning.
Build only one or two of those and you have a quiz engine, not adaptation. Most failed projects under-invest in the content model or skip the policy entirely.
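To make the coupling concrete, here is a minimal sketch of the three systems in Python. Everything in it is illustrative: the class names, the 0.3 prior for unseen skills, and the rule-based policy are placeholders for whatever your domain and data support.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    """Content model: one item, tagged to a skill, with a difficulty estimate."""
    item_id: str
    skill: str
    difficulty: float  # here normalised to [0, 1]; IRT systems use logits

@dataclass
class LearnerModel:
    """Learner model: a running mastery estimate per skill, in [0, 1]."""
    mastery: dict = field(default_factory=dict)

    def estimate(self, skill: str) -> float:
        return self.mastery.get(skill, 0.3)  # illustrative prior for unseen skills

def pick_next(learner: LearnerModel, bank: list[Item]) -> Item:
    """Policy: a rule-based placeholder. Target the weakest skill, then the
    item whose difficulty sits just above the current mastery estimate."""
    weakest = min(bank, key=lambda it: learner.estimate(it.skill))
    candidates = [it for it in bank if it.skill == weakest.skill]
    target = learner.estimate(weakest.skill) + 0.1  # a slight stretch
    return min(candidates, key=lambda it: abs(it.difficulty - target))
```

Swap the placeholder policy for BKT-driven mastery and an information-maximising selector and the interfaces stay the same; that is the point of keeping the three systems separate.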
A 5-question decision framework before you write any code
1. What is the unit of mastery? A specific math skill ("solving linear equations in one variable"), a vocabulary item, a procedural step, a CME concept. The narrower the unit, the easier to model. Start narrow.
2. How many items can you author or generate per skill? The minimum for adaptation that does not feel like a loop is roughly 10 items per skill. Below that, the system circles back and learners notice in the first session.
3. Who owns the calibration data? If your items have never been answered before, you have no difficulty estimates, which means IRT and most BKT priors are guesses. Plan a paid pilot or use crowdsourced calibration before launch.
4. Does this need to live inside an LMS? If yes, build for LTI 1.3 from the MVP, not as an afterthought — provisioning, grade passback, and deep linking change the data model.
5. What is the consequence of a mistake? Adapting a vocabulary drill that briefly serves easier words is low-stakes. Routing a student into remedial maths or out of an honors track is high-stakes and demands human review and bias audits.
Build adaptive when: you have or can create ≥ 10 items per skill, you can run a 4–8 week pilot to bootstrap calibration, and at least one KPI of your business (knowledge gain, time-to-mastery, retention) materially depends on it.
The algorithm taxonomy: pick by stage, not by hype
There are six families of techniques that show up in serious adaptive learning platforms. The right one depends on data volume, content type, and the stakes of the decision.
| Technique | Best for | Min data | Effort | Watch-out |
|---|---|---|---|---|
| Rule-based routing | MVP, narrow domains | None | 1–2 weeks | Brittle as content grows |
| Bayesian Knowledge Tracing (BKT) | Skill mastery, K-12 maths, vocab | ~500 learners | 2–4 weeks (pyBKT) | Assumes skills are independent |
| Item Response Theory (1PL/2PL/3PL) | High-stakes testing, CAT | ~500 responses per item | 4–6 weeks calibration | Needs psychometrics expertise |
| Spaced repetition (SM-2 / FSRS) | Vocab, flashcards, CME recall | None | 1–2 weeks (SM-2) | Weak for problem-solving |
| Deep Knowledge Tracing (DKT) | Long sequences, sparse skill labels | 50K+ sequences | 6–10 weeks | Black-box; needs MLOps |
| RL / contextual bandits | Next-best-item optimisation | Live cohort >1K | 8–12 weeks | Off-policy evaluation required |
Bayesian Knowledge Tracing (BKT) — the workhorse
BKT, originating at Carnegie Mellon, models mastery as a hidden binary state per skill. Four parameters capture the dynamics: prior knowledge p(L0), learning rate p(T), guess p(G), and slip p(S). It is interpretable, computationally cheap, and ships in the open-source pyBKT library. Use it for the second iteration of any platform that has graduated from rule-based routing — once you have a few hundred learners and labelled item-skill pairs.
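For intuition, the whole BKT recursion fits in a few lines: a Bayesian posterior on the latest answer, then the learning transition. The parameter defaults below are illustrative, not calibrated values; in production you would fit them per skill with pyBKT.

```python
def bkt_update(p_mastery: float, correct: bool,
               p_guess: float = 0.2, p_slip: float = 0.1,
               p_learn: float = 0.15) -> float:
    """One BKT step. Returns the updated P(mastered) for the skill."""
    if correct:
        # Correct answers can come from mastery (no slip) or lucky guessing.
        evidence = p_mastery * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_mastery) * p_guess)
    else:
        # Wrong answers can come from a slip or from genuine non-mastery.
        evidence = p_mastery * p_slip
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - p_guess))
    # Learning transition: an unmastered skill may become mastered this step.
    return posterior + (1 - posterior) * p_learn

p = 0.3  # p(L0), the prior
for answer in (True, False, True, True):
    p = bkt_update(p, answer)  # p climbs toward 1.0 as evidence accumulates
```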
Item Response Theory and CAT — for high-stakes testing
IRT is the psychometric backbone of computerised adaptive tests like the GRE and GMAT, where test length is reduced 20–40% while precision is maintained. The 1PL Rasch model fits difficulty only; 2PL adds discrimination; 3PL adds a guessing parameter, useful for multiple choice. If your product runs certifications, placement, or licensing exams, you almost certainly need IRT and a calibration phase before you can adapt anything safely.
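A sketch of the 2PL machinery behind a CAT loop: the response model, the item information function, and maximum-information selection. The function names and tuple-based item bank are our shorthand; a production CAT adds exposure control and a stopping rule on the standard error of theta.

```python
import math

def p_correct_2pl(theta: float, a: float, b: float) -> float:
    """2PL response model: P(correct) given ability theta,
    discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta: float, a: float, b: float) -> float:
    """Fisher information under 2PL: I(theta) = a^2 * P * (1 - P)."""
    p = p_correct_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta_hat: float, unanswered: list[tuple[str, float, float]]) -> str:
    """Core CAT policy: serve the item most informative at the
    current ability estimate. Each entry is (item_id, a, b)."""
    best = max(unanswered, key=lambda it: item_information(theta_hat, it[1], it[2]))
    return best[0]
```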
Deep Knowledge Tracing (DKT) and the modern wave
Stanford’s DKT (Piech et al.) reframed knowledge tracing as a sequence-modelling problem with LSTMs, reporting roughly a 25% AUC gain over BKT on the ASSISTments dataset. Modern variants (SAINT, AKT, gated GRU architectures from 2024–2025 papers) push this further. The trade-offs are data volume and interpretability: DKT needs 50K+ learner sequences to train robustly and is harder to debug when a recommendation looks wrong.
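To show why the data requirement is what it is, here is a compact PyTorch sketch of the original DKT formulation: the model sees nothing but one-hot (skill, correctness) interaction sequences, so it needs many of them. Hyperparameters and shapes are illustrative; training uses binary cross-entropy against the observed correctness of each next item.

```python
import torch
import torch.nn as nn

class DKT(nn.Module):
    """Deep Knowledge Tracing: an LSTM over one-hot (skill, correctness)
    pairs, predicting per-skill P(correct) at every step."""
    def __init__(self, num_skills: int, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2 * num_skills,  # skill x {wrong, right}
                            hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, num_skills)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, _ = self.lstm(x)                # (batch, seq_len, hidden)
        return torch.sigmoid(self.out(h))  # (batch, seq_len, num_skills)

# Shape check on toy data: 32 learners, 50 interactions, 100 skills.
model = DKT(num_skills=100)
x = torch.zeros(32, 50, 200)  # one-hot interaction histories
preds = model(x)              # per-skill predictions at every step
```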
Spaced repetition (SM-2, FSRS) for fact recall
For vocabulary, formulas, drug names, and any other isolated-fact content, spaced repetition is hard to beat. SM-2 (the SuperMemo-2 algorithm behind Anki's classic scheduler) is simple. FSRS, the machine-learning memory model shipped in Anki 23.10+, reduces review load by roughly 20–30% versus SM-2 at the same recall target. Use these inside a broader adaptive system, not as the entire system.
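The classic SM-2 recursion is small enough to show whole. This follows the original SuperMemo-2 formulation (implementations differ on how they treat the ease factor during lapses); quality is the learner's self-graded 0–5 recall.

```python
def sm2_review(quality: int, repetitions: int, interval: int,
               ease: float) -> tuple[int, int, float]:
    """One SM-2 review. Returns (repetitions, next interval in days, ease)."""
    if quality < 3:
        return 0, 1, ease  # lapse: restart the schedule, keep the ease
    ease = max(1.3, ease + (0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)))
    repetitions += 1
    if repetitions == 1:
        interval = 1
    elif repetitions == 2:
        interval = 6
    else:
        interval = round(interval * ease)
    return repetitions, interval, ease

# A card starts at repetitions=0, interval=0, ease=2.5.
state = (0, 0, 2.5)
for grade in (5, 4, 5):
    state = sm2_review(grade, *state)  # intervals grow: 1, 6, ~16 days
```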
Reinforcement learning and contextual bandits
When you have an active cohort and a clear objective (minimise time-to-mastery, maximise engagement), contextual bandits like Thompson sampling balance exploration and exploitation across item choices. Reserve this for stage-3 platforms with the MLOps to support off-policy evaluation; deployed naively, RL can drive learners into unproductive content loops.
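Here is the exploration-exploitation core as a Beta-Bernoulli Thompson sampler over candidate items. It is deliberately non-contextual to stay short (a contextual version conditions the reward model on learner state), and the reward shown as a plain boolean is the hard design decision in practice: define it carelessly and the policy optimises clicks, not mastery.

```python
import random

class ThompsonBandit:
    """Beta-Bernoulli Thompson sampling over a fixed set of items."""
    def __init__(self, item_ids: list[str]):
        self.alpha = {i: 1.0 for i in item_ids}  # Beta prior: successes + 1
        self.beta = {i: 1.0 for i in item_ids}   # Beta prior: failures + 1

    def choose(self) -> str:
        # Sample a plausible reward rate per item; serve the best draw.
        # Uncertain items get explored, proven items get exploited.
        draws = {i: random.betavariate(self.alpha[i], self.beta[i])
                 for i in self.alpha}
        return max(draws, key=draws.get)

    def update(self, item_id: str, reward: bool) -> None:
        if reward:
            self.alpha[item_id] += 1.0
        else:
            self.beta[item_id] += 1.0
```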
A reference architecture for an adaptive learning platform
The architecture below is the one we deploy at Fora Soft for adaptive systems past the MVP stage. The feature store, knowledge graph, and model-drift monitoring are optional for the MVP and become required at scale.
- Frontend. React or Vue web; React Native or Flutter mobile; WCAG 2.2 AA from day one. Live regions for announcing adaptive content changes to screen readers.
- API gateway. FastAPI or Node, OAuth 2.0 + OIDC. LTI 1.3 endpoints for LMS integration (roster sync, grade passback, deep linking).
- Learning Record Store (LRS). xAPI events into Kafka or Redpanda, archived in S3/GCS. Watershed, Learning Locker, or a custom store.
- Item bank. PostgreSQL for metadata; tags for skills, prerequisites, difficulty (calibrated), Bloom level, language, accessibility flags.
- Feature store. Feast or a Redis cache that serves learner embeddings, item difficulties, and prerequisite vectors at <100ms.
- Mastery model service. BKT or DKT inference behind FastAPI or Ray Serve. Stateless; reads from feature store.
- Recommendation engine. Reads mastery state, applies policy (rule, IRT max-information, bandit), returns next item.
- Knowledge graph. Neo4j or RDF for prerequisite ontology and learning paths.
- Analytics. Looker, Metabase, or custom dashboards over a warehouse (BigQuery, Snowflake, or Postgres at smaller scale).
- Observability. Prometheus + Grafana for service health; Evidently or WhyLabs for model-drift monitoring.
The interoperability layer matters as much as the model. Plan for LTI 1.3, xAPI, and (for legacy customers) SCORM 1.2 and 2004; QTI 2.1 helps if you need to import or export item banks. For a fuller treatment of media-rich e-learning architecture, see our scalable video streaming and conferencing guide.
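For a concrete taste of the telemetry layer, this is roughly what a minimal xAPI "answered" statement looks like as your backend emits it to the LRS. The top-level field names follow the xAPI specification; the helper signature and activity URL are our own illustration.

```python
import uuid
from datetime import datetime, timezone

def answered_statement(learner_email: str, item_id: str,
                       correct: bool, duration_s: float) -> dict:
    """Build a minimal xAPI statement for one answered item."""
    return {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": {"objectType": "Agent", "mbox": f"mailto:{learner_email}"},
        "verb": {"id": "http://adlnet.gov/expapi/verbs/answered",
                 "display": {"en-US": "answered"}},
        "object": {"objectType": "Activity",
                   "id": f"https://example.com/items/{item_id}"},
        "result": {"success": correct,
                   "duration": f"PT{duration_s:.1f}S"},  # ISO 8601 duration
    }
```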
Compliance: COPPA, FERPA, GDPR-K, and the rest
Education software handles minors and academic records. Skip the compliance layer and procurement will kill your deal in week six.
COPPA (US, learners under 13). The 2026 update requires separate parental consent for AI features — tutors, writing assistants, adaptive engines — on top of any account-creation consent. No behavioural advertising. Build a consent UI distinct from sign-up, and automate data deletion within a documented window.
FERPA (US, K-12 and higher ed). Student records may be disclosed only for legitimate educational purposes. Parents have access and amendment rights. Implement role-based access control, log every disclosure, and audit your vendor agreements.
GDPR-K (EU, learners under 16, threshold varies by member state). Explicit parental consent, right to deletion, right to explanation of automated decisions. Use differential privacy or k-anonymity (k ≥ 5) for any aggregate analytics that touch demographic data.
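A sketch of what k-anonymous reporting looks like in practice, assuming a pandas dataframe with one row per learner: aggregate by demographic group, then suppress any cell with fewer than k learners before it reaches a dashboard or export.

```python
import pandas as pd

def k_anonymous_aggregate(df: pd.DataFrame, group_cols: list[str],
                          value_col: str, k: int = 5) -> pd.DataFrame:
    """Aggregate a per-learner metric by demographic group,
    dropping any group smaller than k."""
    agg = (df.groupby(group_cols)[value_col]
             .agg(n="count", mean_value="mean")
             .reset_index())
    return agg[agg["n"] >= k]
```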
SOC 2 Type II and ISO 27001. Required for enterprise procurement, especially in higher ed and corporate L&D. Plan a 12-month audit roadmap if you do not have one.
Accessibility (WCAG 2.2 AA). Adaptive UIs are a special challenge: dynamic content updates must announce, colour-coded difficulty must not be the only signal, and keyboard navigation must remain coherent as the path changes. Test with NVDA on Windows and VoiceOver on macOS/iOS, and budget for screen-reader testers in your QA team.
Need a compliance & architecture review of your adaptive plan?
Send us your spec or current draft. We will mark up COPPA, FERPA, GDPR-K, LTI/xAPI, and the algorithm choice in 48 hours.
2026 cost ranges — MVP to enterprise
These ranges are calibrated to projects we have shipped and to 2026 market rates. They include engineering, ML, design, QA, and infrastructure for the build phase only — content authoring, IRT calibration studies, and pedagogical advisory are usually separate line items. Because we run our pipeline on spec-driven AI agents, our timelines lean toward the lower bound of each range.
| Tier | Scope | Build cost | Time to first launch | Annual ops |
|---|---|---|---|---|
| MVP | Single subject, rule-based, 100–300 items, web | $90K–$160K | 3–4 months | $8K–$20K |
| Scale | Multi-subject, BKT or IRT, 500–2K items, mobile, analytics | $260K–$420K | 5–7 months | $35K–$80K |
| Enterprise | DKT + knowledge graph + LMS integrations + SOC 2 | $650K–$1.1M | 8–12 months | $110K–$260K |
| Buy off-the-shelf | ALEKS / Knewton / DreamBox seat licences | $0 build | Days to weeks | $15–$200/seat/yr |
For more on how we estimate, see our software estimation playbook. Off-the-shelf platforms win on speed and lose on customisation, IP ownership, and unit economics above ~5K concurrent learners. Custom builds win on differentiation and on integrations the SaaS players will not implement for you.
The team shape that ships an adaptive learning platform
An adaptive build is partly a software project and partly an applied-ML project. A vendor that proposes only software roles is missing half the work.
- ML / data engineer (1–2). Owns the learner model, item calibration, feature store, drift monitoring.
- Backend engineer (1–2). APIs, LRS, integration glue, LTI/xAPI/SCORM endpoints.
- Frontend / mobile engineer (1–2). Web first; React Native or Flutter for mobile.
- DevOps / SRE (0.5). Infra-as-code, GPU training jobs, model serving, observability.
- Product / pedagogy lead (0.5). Translates learning objectives into skill graphs and policy rules. Hire someone who has actually taught the subject.
- QA, accessibility tester (1). Cross-browser, screen-reader, network-impairment, and pedagogical correctness testing.
- Project manager (0.5). Sprint cadence, weekly reporting, change-order management.
Total headcount lands in the 4–7 range for MVP-to-Scale, and 8–12 for enterprise. For more on how Fora Soft assembles teams, see our project discovery process.
Mini case — Tabsera, a virtual school adapted to a low-bandwidth market
Situation. An entrepreneur in Somaliland wanted a virtual school that worked across English, French, Arabic, and Turkish for students in regions where high-bandwidth video is unreliable.
What we built. Tabsera — a multi-role platform where users can give lectures as teachers, manage schools as principals, or attend classes as students from anywhere in the world. Adaptive elements at the lesson level (skill recap, resequencing) sit on top of a virtual-classroom core that gracefully degrades to audio-only and asynchronous modes when networks are weak.
Outcome. Backed by Telesom (Somaliland’s largest mobile operator) and featured on national channel Eryal TV. The same architectural pattern — an adaptive layer wrapped around a robust media core — is what we deploy on BrainCert ($3M+ revenue) and Scholarly (15K+ users).
Five pitfalls that quietly kill adaptive learning platforms
1. The cold-start problem, ignored. A new learner has no history, so your recommendations are guesses for the first 10–30 items. Mitigation: a 5–10 minute pre-test, a self-rating prompt, or active-learning item selection that maximises information gain. Prediction confidence should stabilise by item 20–30.
2. The content bottleneck. Adaptation needs at least 10 well-tagged items per skill. Below that, the system loops. Solution: budget content authoring as a parallel workstream from week one; consider LLM-generated drafts reviewed by subject-matter experts.
3. Overfitting to early answers. Two wrong answers should not lock a learner into remediation. Use Bayesian updating with reasonable priors, require 5+ independent observations before major routing changes, and give learners a way to flag a question as confusing.
4. Gaming the system. Some learners discover that wrong answers fetch easier content and exploit it. Mitigations: anomaly-detect fast wrong answers (under 5 seconds on complex items; see the sketch after this list), randomise within-skill order, and track strategic-failing patterns.
5. Interoperability as an afterthought. "We will add LTI later" is the surest way to lose enterprise sales. Build LTI 1.3, xAPI, and (where required) SCORM 2004 / cmi5 from sprint two. Test against Canvas, Moodle, and Brightspace.
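A minimal version of the fast-wrong-answer detector from pitfall 4. The event schema and every threshold here are illustrative; tune them against sessions you have hand-labelled as gaming before acting on the flag.

```python
def flag_gaming(events: list[dict], fast_s: float = 5.0,
                window: int = 5, threshold: int = 3) -> bool:
    """Flag a learner who repeatedly answers complex items wrong
    in under fast_s seconds within the last `window` interactions.
    Each event: {'correct': bool, 'seconds': float, 'complex': bool}."""
    recent = events[-window:]
    fast_wrong = sum(1 for e in recent
                     if e["complex"] and not e["correct"]
                     and e["seconds"] < fast_s)
    return fast_wrong >= threshold
```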
KPIs that prove an adaptive learning platform actually works
Quality KPIs. Pre-to-post knowledge gain Cohen’s d ≥ 0.5; mastery-model AUC ≥ 0.75 (MVP), 0.82 (scale), 0.85+ (enterprise DKT); recommendation acceptance rate ≥ 60%; cold-start convergence by item 20–30.
Business KPIs. Time-to-mastery 30–50% lower than non-adaptive control; month-1-to-month-3 retention ≥ 70% in K-12, ≥ 60% in corporate L&D; module completion ≥ 60% (mandatory) or ≥ 40% (elective); demographic parity gap < 10% on mastery gain across gender, ethnicity, and socioeconomic status.
Reliability KPIs. System uptime ≥ 99.9% during testing windows; recommendation latency P95 < 250ms; LMS-sync uptime ≥ 99.5%; item-parameter drift < 0.3 logits over a 6-month window; 100% WCAG 2.2 AA pass on Axe + manual audit.
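Two of those quality numbers are cheap to compute once telemetry exists. A sketch, assuming numpy arrays of pre/post test scores and a held-out set of (observed correctness, predicted probability) pairs from the mastery model:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def cohens_d(pre: np.ndarray, post: np.ndarray) -> float:
    """Pre-to-post knowledge gain, standardised by the pooled SD."""
    pooled_sd = np.sqrt((pre.std(ddof=1) ** 2 + post.std(ddof=1) ** 2) / 2)
    return (post.mean() - pre.mean()) / pooled_sd

def mastery_auc(y_true, y_prob) -> float:
    """How well the model's predicted P(correct) ranks observed answers.
    y_true: 0/1 correctness; y_prob: model probabilities, same length."""
    return roc_auc_score(y_true, y_prob)
```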
When NOT to build a custom adaptive learning platform
Some products do not need to build at all. Stay on a SaaS adaptive engine or a static LMS when:
- Your subject is well-covered by ALEKS, DreamBox, Knewton Alta, or a domain-specific adaptive product, and customisation is cosmetic.
- You have fewer than 1,000 learners on the horizon and per-seat economics work.
- Your differentiation is content, brand, or service — not adaptation itself.
- You cannot author or licence at least 10 items per skill before launch.
- You have no in-house owner for the system after handover. Adaptive platforms need ML-aware product care — one full-time engineer-equivalent at minimum.
If three or more of those apply, buy. If most do not — especially if your roadmap depends on integrations the SaaS players will not build for you, on data residency they cannot offer, or on adaptation in a domain they do not cover — building wins on a 2–3 year horizon.
Stay on SaaS when: a domain platform already covers your subject, your enrolment ceiling is <1K learners, and you have no in-house ML-aware owner. Build when integrations, residency, vertical depth, or unit economics force it.
2026 market context — size, growth, and the LLM wave
The global adaptive learning market is large enough that custom builds compete on niches rather than generality. Mordor Intelligence puts the 2026 market at roughly USD 5.3B with a CAGR of about 20% through 2030; vertical analysts (IMARC Group, Spherical Insights) place K-12 and higher education at 75% of spend, with corporate L&D and certification programmes catching up fast.
The LLM wave reshaped the landscape from 2024 onward. Khan Academy’s Khanmigo, Duolingo Max, and OpenAI’s tutor pilots demonstrated that conversational tutoring can complement — but not replace — classical knowledge tracing. Most serious platforms in 2026 combine an LLM tutor for explanation with a BKT or DKT layer for routing: the LLM does the talking, the tracer picks the next item.
The open-source toolbox your vendor should know cold
A capable adaptive-learning vendor in 2026 is fluent across this stack:
- pyBKT. The open-source Python BKT library from UC Berkeley's CAHL lab; the production default for stage-2 platforms.
- EduCDM, EduData. Cognitive diagnosis and educational-data toolkits from the bigdata-ustc group, useful for advanced item-response modelling.
- PyTorch / Keras DKT implementations. Reference codebases for SAINT, AKT, and gated DKT variants.
- FSRS scheduler. The neural memory model behind modern Anki; ports for web and mobile exist.
- Feast / Tecton. Feature stores for sub-100ms inference at scale.
- Ray Serve / NVIDIA Triton / KServe. Model-serving on Kubernetes.
- Learning Locker, Watershed. Open-source xAPI Learning Record Stores.
- 1EdTech (formerly IMS Global) LTI 1.3 / Caliper / QTI. The interoperability bedrock.
- Evidently, WhyLabs. Model-drift monitoring — not optional once you ship DKT.
For related reading on AI-assisted content creation that pairs naturally with adaptation, see tailored educational material generation techniques and the ultimate guide to AI-assisted educational content creation.
Vertical playbook — the answer is different for each market
K-12. COPPA dominates. Adaptation must work on Chromebooks, low-bandwidth networks, and shared devices. Pair adaptation with parent and teacher dashboards. Reference: how to create AI-generated educational resources for teachers.
Higher education. FERPA, accessibility (Section 508), LTI 1.3 against Canvas / Brightspace / Blackboard. Faculty want explainability — "why did the system recommend this?" — so prefer interpretable models for high-stakes paths.
Corporate L&D and compliance training. SCORM 2004 and cmi5 still rule procurement. Tie KPIs to time-to-competency and to compliance-renewal cycles. See our corporate training video platform guide.
Professional certification (medical, finance, IT). CAT and IRT are the bedrock; calibration studies are non-negotiable; accreditors will audit your psychometrics.
Language learning. Spaced repetition + pronunciation feedback + conversation simulation. FSRS for vocabulary, an LLM for dialogue practice.
Healthcare CME. HIPAA-adjacent if scenarios reference real patient data; otherwise SCORM 2004 + IRT-graded cases.
What a strong discovery phase produces
A 2–3 week paid discovery phase, run before any sprint zero, should ship the following artefacts. If your vendor cannot produce them, the project is at risk before it starts:
- Skill graph (concepts and prerequisites) for the launch domain.
- Algorithm choice with justification (rule-based vs. BKT vs. IRT vs. DKT) tied to data volume.
- Reference architecture diagram aligned to your cloud and existing LMS.
- Compliance map: COPPA, FERPA, GDPR-K, SOC 2, accessibility.
- A delivery plan broken into milestones, with named engineers per role.
- A risk register — the 5–10 most likely things to go wrong, each with mitigation.
- An MSA + SoW with full IP transfer, source escrow, and exit clauses.
Want a 1-week discovery sketch on your domain?
Skill graph, algorithm choice, architecture, compliance, cost. We will hand it back with no obligation.
Red flags when picking an adaptive-learning vendor
Two of these and you walk away.
- "We will use AI" with no learner-model specifics. If the vendor cannot say "BKT" or "IRT" without prompting, they have not built one before.
- No content authoring plan. The single most common failure mode. Always ask: how will we get to 10+ items per skill?
- "LTI later" or no answer on xAPI. Enterprise procurement will block you.
- No accessibility testers in QA. WCAG 2.2 AA on adaptive UIs is non-trivial.
- Vendor owns the cloud account. Decline. Your AWS/GCP organisation, vendor IAM roles, clean offboarding.
- No bias / parity audit plan. High-stakes routing without disaggregated KPIs is a lawsuit waiting to happen.
- Reluctance on IP transfer or escrow. Hard stop.
FAQ
What is the difference between adaptive learning and personalised learning?
Adaptive is algorithm-driven and real time — the system picks the next item from telemetry. Personalised is learner-driven and deliberate — the student or instructor chooses the path. Strong adaptive learning platforms blend both: a default adaptive route with optional learner-controlled detours.
How much does it cost to build an adaptive learning platform in 2026?
An MVP with rule-based routing on a single subject runs $90K–$160K over 3–4 months. A multi-subject platform with BKT and analytics is $260K–$420K over 5–7 months. Enterprise builds with DKT, knowledge graphs, and LMS integrations land in the $650K–$1.1M range to first launch. Annual operations add 10–25% on top of build cost.
BKT or DKT — which should we use?
Start with BKT. It is interpretable, ships in pyBKT, and fits cleanly when you have labelled item-skill pairs. Move to DKT only when you have 50K+ learner sequences, your skill labels are sparse or noisy, and you have MLOps to monitor model drift. Most platforms never need DKT — the gain over a tuned BKT is real but moderate.
How do we integrate adaptive content with Canvas, Moodle, or Blackboard?
Use LTI 1.3 (OAuth 2.0 based). Your platform acts as an LTI tool provider; the LMS handles authentication, roster sync, and grade passback. Add xAPI on top for richer analytics into a Learning Record Store. Plan integration testing in sprint two, not sprint twenty — LMS quirks are real and will eat schedule.
What is the cold-start problem and how do we fix it?
A new learner has no history, so early recommendations are guesses. Mitigations: a 5–10 minute pre-test, a self-rating prompt, an active-learning policy that maximises information gain on the first 20 items, or a Bayesian prior seeded from population statistics. With those in place, prediction confidence stabilises by item 20–30.
Is COPPA / FERPA / GDPR-K compliance hard?
Not hard, but not optional. COPPA needs separate parental consent for AI features and automated deletion windows. FERPA needs documented disclosure logs and audited vendor agreements. GDPR-K needs explicit parental consent and k-anonymity (k ≥ 5) for aggregates. Plan all three from the architecture phase; retrofitting is 3–5x the cost.
Do we still need WCAG 2.2 AA on an adaptive UI?
Yes, and it is harder than on a static UI. Dynamic content updates must be announced via ARIA live regions; difficulty must not rely on colour alone; keyboard navigation must remain coherent as the path changes. Test with NVDA and VoiceOver, run Axe on every page, and budget for screen-reader testers in QA.
How do we know our adaptive learning platform actually works?
Pre-to-post knowledge gain (Cohen’s d ≥ 0.5), time-to-mastery reduction (30–50% versus a non-adaptive control), retention (≥ 70% K-12, ≥ 60% corporate), and mastery-model AUC (≥ 0.75 MVP, ≥ 0.85 enterprise). Always disaggregate by demographics to catch bias early.
What to Read Next
E-learning
AI-powered multimedia solutions for e-learning
The big-picture companion piece on AI in e-learning.
Integration
Integrating AI into e-learning software development
A practical guide to bolting AI onto an existing LMS.
Curriculum
Machine learning in curriculum development
Where ML actually shifts the syllabus, not just the UI.
Recommenders
Top content recommendation platforms for eLearning
A teardown of recommender architectures used in 2026.
Content
AI-assisted educational content creation
Solving the content bottleneck without losing rigour.
Ready to ship an adaptive learning platform that earns its keep?
Adaptive learning platforms reward teams that take three things seriously: the learner model, the content model, and the policy. Pick algorithms by stage, plan content as a parallel workstream, build interoperability and compliance from sprint two, and measure with KPIs that disaggregate by demographic. Skip any of those and you ship a quiz engine that costs more than a SaaS seat licence.
Fora Soft has been shipping that combination since 2005, on platforms ranging from Tabsera (multilingual virtual school in Somaliland) to BrainCert ($3M revenue WebRTC LMS) to Scholarly (15K+ users). Our agentic engineering pipeline lets us deliver toward the lower bound of every cost range above. If your roadmap depends on a platform that actually adapts, the next step is a 30-minute conversation.
Need a build-or-buy second opinion on adaptive learning?
Tell us your subject, learner profile, and target scale. We will hand back a topology, an algorithm choice, and a 2026 budget — whether or not we end up building it together.

