
Key takeaways
• ALDA is AI for curriculum design. An e-learning product that helps US colleges and universities build courses for ~500,000 underserved students — faster and cheaper than the traditional instructional-design cycle.
• Custom beats off-the-shelf for AI products. Michael Feldstein weighed a large US firm, individual contractors and mid-size shops before picking Fora Soft for the right balance of professionalism and agility.
• Agile is the AI survival kit. ChatGPT’s model behavior changed twice during the first iteration; the team absorbed it because the process was built for change.
• “Everything on schedule, close to budget, release one ready for customers.” Michael’s direct words after the first delivery — and the reason he kept us through release two.
• 5/5 rating across professionalism, engagement and communication. “Yes, and yes” to future projects and recommendations.
Why Fora Soft shared this conversation
Most agency case studies are PR. This one is closer to a founder’s raw diary of what actually happens when you commission a custom AI-powered product for a demanding, regulated domain. Michael Feldstein runs E-literate and has shipped educational technology to millions of students; he has worked at the third-largest US textbook publisher and co-founded an edtech startup. He chose Fora Soft for ALDA — an AI-powered curriculum tool for US colleges — after interviewing a large US firm, individual contractors, and a couple of mid-size shops our size.
We published this interview for three reasons. First, it is an honest account of how an experienced edtech buyer evaluates an offshore engineering partner. Second, it is a real look at the engineering rhythm of shipping an AI product in 2024–2026 — including what happened when ChatGPT’s behavior changed mid-sprint. Third, it documents the working agreement that makes this kind of engagement land: agile, high-trust, no over-specified “purple-button” requirements.
If you are building an AI-powered product for education, healthcare, or any other domain where AI behavior is non-deterministic and the stakes are real, the conversation below will feel familiar.
Scoping a custom AI product?
30 minutes with us — we will pressure-test scope, model choice, delivery cadence and the realistic first-release timeline against your business goal.
About ALDA and the problem it solves
ALDA is an AI-powered e-learning application designed to help colleges and universities create study curricula faster and more economically. The consortium of schools behind the pilot serves roughly half a million students, many of them first-generation, from underserved urban communities where career skills, job markets and the tools of education are all moving faster than traditional curriculum committees can keep up with.
The program runs as a six-month design-build workshop series in which participating institutions work with ALDA to co-develop curriculum using AI assistance. The bet is that an AI tool, done right, can compress the curriculum design cycle from quarters to weeks while keeping instructional-design quality high. The risk — and the reason custom development beat an off-the-shelf tool here — is that every institution’s context and every student cohort’s needs differ.
Meet Michael Feldstein, ALDA CEO
Valeria: Hi, Michael. Thank you for joining me today. Before we start, could you introduce yourself and tell me a few words about your project?
Michael: It’s my pleasure, Valeria. My name is Michael Feldstein. I run a company called E-literate. We work primarily in the United States, primarily with colleges and universities, helping them use technology to improve education. I have a long background in educational technology — I’ve worked at the third-largest textbook publisher in the country, helping develop their flagship teaching software that serves millions of students every year. I also worked at Oracle, and I co-founded a startup. So I have a fair bit of experience developing educational technology.
Valeria: Your project, as I understand it, is an AI-powered tool to improve the way university curricula are created, right?
Michael: Yes, and this is very important. We’re working with a group of colleges and universities, many of which serve underserved students, first-time college students, and students in poor urban environments. It’s a mix of schools that serve a total of about half a million students. In an environment where skills are changing, AI is changing and jobs are changing, it’s very important to develop new courses that fit these students’ needs and background. That is very difficult for colleges and universities of all sizes. We’re testing this through a project with these institutions — a six-month design-build workshop series in which we’re going to build an AI application together to see whether artificial intelligence can help them develop courses more quickly and economically.
How Michael chose Fora Soft
Valeria: Could you describe your first interactions with the Fora Soft team? What were your first impressions?
Michael: My first impressions from the business conversations before we got started were very, very positive. I interviewed a few firms ranging from a large firm that I work with frequently in the United States — very good, but costly — to individual contributors, to a couple of firms roughly Fora Soft’s size. I was impressed from the beginning with the professionalism of your organization, the way your leader helped define the project, explain expectations, help me navigate the software we needed to begin the engagement, and outline a familiar professional process.
Michael, on picking Fora Soft: “I was impressed from the beginning with the professionalism of your organization, the way in which your leader helped define the project, explain expectations, help me navigate the software that we needed to begin the engagement.”
Expectations vs. reality — how progress looked
Valeria: Over the time we’ve worked together, what do you think about the progress of your project so far?
Michael: I’m very pleased with the first release, which we are testing right now. It’s almost ready for my customers to see in a few weeks, and I think we’ve made great progress. We’ve already started working in parallel on the second release. The team has done a great job of staying close to budget and schedule, keeping me informed, adjusting with me as we learn new things and get new ideas, and suggesting new ideas about how to make the project better.
Valeria: You’ve just named three of the right factors for evaluating any development team. Have there been notable differences between your expectations before development and the actual progress of the work?
Michael: This is actually remarkable. I would say that my expectations were pretty close to what I got. Usually, you’re disappointed. If anything, the work has progressed more rapidly than I expected. I tend to be conservative because software development is hard and usually runs into problems that you can’t solve quickly. With Fora Soft, problems have been small enough and handled well enough that we haven’t lost time to them.
What actually works in the partnership
Valeria: What three things have you liked the most about Fora Soft so far?
Michael: I would say that Fora Soft practices agile software development in a real way. Lots of firms say they do — very few do. You have a good understanding of the right level of documentation for what we’re working on, the importance of communication and the right cadence of meetings, and how those different practices fit together. My experience with Fora Soft has been that it works as a very accomplished agile software development shop, which is exactly what I needed.
Michael, on agile done right: “Fora Soft works as a very accomplished agile software development shop, which is exactly what I needed.”
Valeria: With your background in educational software, how well do we communicate technical details and development specifics to clients?
Michael: The quality of communication the client brings to the engagement matters. I have to make sure I’m telling the developers enough about why a feature is needed so they can bring their creativity and say, “Well, we can’t build it that way, but if you want that, we can do it a better way.” At the same time, I need to avoid over-specifying — I shouldn’t be telling you I need a purple button that does exactly this, exactly here. You’re professionals; you should bring your skills to the table. I also need to communicate priorities and interact with suggestions — what to do first, second, third, and why. As the project continues, our collaboration improves. That’s exactly what has happened with Fora Soft.
Want a partner that does agile for real?
Book a 30-minute scoping call and we will discuss your AI product, the right engagement shape and a realistic first-release plan.
Tricky challenges in building an AI product
Valeria: Have there been tricky challenges during development?
Michael: Oh, it’s not fun unless there are tricky challenges. AI is inherently tricky — it’s software designed to be a little unpredictable. It evolves all the time, and we don’t exactly know how it will behave when we do certain things with it. There have been changes twice just since we started developing the first iteration — ChatGPT changed in ways that actually helped us. They could have gone the other way, but we’ve been lucky both times. This is again why agile is important, and why working with a shop that understands agile matters. There are always challenges. If you do the project right, those challenges become opportunities.
Valeria: And do you feel we’re doing the project right?
Michael: As I said, I’m very happy with the first iteration and very pleased with our progress towards the second. I would say Fora Soft is doing very well.
The scorecard — a clean 5/5
Valeria: On a scale of one to five, how would you rate our performance, including professionalism, engagement and communication?
Michael: Five.
| Dimension | Michael’s rating | Evidence |
|---|---|---|
| Professionalism | 5 / 5 | Structured scoping, onboarding, and engagement process |
| Engagement | 5 / 5 | Cared for, listened to, responded to — not treated as a paycheck |
| Communication | 5 / 5 | Right documentation weight, right meeting cadence, suggestions welcome |
| Budget / schedule | On track | “Great job of staying close to budget and schedule” |
What Michael would change about working with us
Valeria: Is there anything you wish Fora Soft would improve?
Michael: Not really. Every company has a sweet spot and a fit for a particular type of work. I was looking — since this is a minimum viable product and I was getting a lot of customer feedback — for a company that can strike a balance between good project-management practices and not being too heavyweight or process-bound. Fora Soft has been perfect for me for that.
What we actually built for ALDA
Michael’s framing focuses on process. For buyers evaluating a similar engagement, a few technical details matter.
Product shape. A web-based AI assistant embedded in the curriculum-design workflow. Institutional instructional designers, department chairs and faculty review AI-generated drafts, adjust, and export to their LMS. The UI has to be familiar to non-technical educators, while the AI layer does the heavy reasoning behind the scenes.
AI layer. Built on top of OpenAI (with provider-agnostic design so we can swap or blend models). Heavy investment in prompt design, evaluation harnesses and regression tests — because model behavior changed twice during release one, and will change again. Every prompt has a versioned template, an expected-output spec and an automated eval that runs before any release (a minimal sketch of that setup follows these notes).
Delivery cadence. Two-week iterations, weekly client sync, parallel release tracks after release one. Lightweight documentation — enough to keep the team aligned, not enough to slow delivery. Budget and timeline are tracked on the same board the client sees.
What makes it hard. Curriculum is a sensitive domain — wrong output is not just embarrassing, it disadvantages real students. The team built explicit human-in-the-loop review gates, input validation for prompts, and a mechanism for faculty to flag low-quality AI output that feeds back into prompt refinement.
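To make the prompt-versioning point concrete, here is a minimal sketch of what a versioned template plus a structural eval can look like. The names (PromptTemplate, passes_spec, course_outline_draft) are illustrative, not taken from the ALDA codebase, and a production harness scores far more than section presence.

```python
# Minimal sketch of a versioned prompt template with a structural eval.
# All names here are illustrative, not taken from the ALDA codebase.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class PromptTemplate:
    name: str      # e.g. "course_outline_draft"
    version: str   # bumped on every wording or structure change
    template: str  # prompt text with placeholders


@dataclass
class ExpectedOutputSpec:
    required_sections: list[str] = field(default_factory=list)


def passes_spec(model_output: str, spec: ExpectedOutputSpec) -> bool:
    """Structural check run before any release; real evals also score rubric fit and tone."""
    lowered = model_output.lower()
    return all(section.lower() in lowered for section in spec.required_sections)


OUTLINE_PROMPT = PromptTemplate(
    name="course_outline_draft",
    version="1.3.0",
    template=(
        "Draft a course outline for {course_title} aimed at {student_profile}. "
        "Cover these sections: {sections}."
    ),
)

OUTLINE_SPEC = ExpectedOutputSpec(
    required_sections=["learning outcomes", "weekly schedule", "assessment plan"]
)

# In CI: render OUTLINE_PROMPT, call the model, then assert passes_spec(output, OUTLINE_SPEC)
# so a silently changed model cannot ship drafts that drop required sections.
```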
Lessons for anyone commissioning a custom AI product
1. Hire a shop that takes AI non-determinism seriously. The cheapest vendor will treat your AI feature like any other API integration and ship brittle prompts. The right partner invests in eval harnesses and versioning from day one — because models change.
2. Bring the “why”, not the “how”. Michael’s point: do not over-specify the purple button; explain what your user needs it to achieve. That lets the team bring its creativity and propose better solutions than the spec would have forced.
3. Match the team size to your stage. Large consultancies over-process MVPs; individual contractors under-process them. For ALDA, the right size was a mid-size shop with real process discipline but no bureaucratic overhead.
4. Insist on visible progress. Weekly demos, shared boards, budget transparency. If you cannot see the burndown in real time, you will be surprised late in the project.
5. Work with people who treat you like a person. Michael’s most pointed comment: “It’s very easy in this world of contract software development to get a vendor who just treats you like, ‘Here’s the code – give us a paycheck.’” Pick a team whose humans show up.
On values, trust and the non-commodity parts of software
Valeria: Is there anything else important to you in a project like this?
Michael: It’s very important to me that I work with a firm where the people I’m dealing with are concerned about me as the customer and are honest with me about things that are going well and things that are not. It’s very easy in this world of contract software development to get a vendor who just treats you like, “Here’s the code – give us a paycheck.” I’ve been very pleased with every single Fora Soft employee that I’ve interacted with. I felt cared for, listened to, and responded to, and that’s unusual.
Michael, on trust: “I’ve been very pleased with every single Fora Soft employee that I’ve interacted with. I felt cared for, listened to, and responded to, and that’s unusual.”
Would you recommend Fora Soft?
Valeria: Would you collaborate with Fora Soft on future projects? Or recommend us to others?
Michael: Yes, and yes.
Valeria: Anything else you’d like to share about your experience with Fora Soft?
Michael: I recommend the company. I think you deliver good software practices and creative collaborative work at a good price. So I really can’t think of anything negative to say.
Michael, on the recommendation: “I recommend the company. I think you deliver good software practices and creative collaborative work at a good price.”
Watch the full conversation with Michael on our YouTube channel →
Why custom AI beats off-the-shelf in regulated domains
ChatGPT wrappers look attractive until the domain gets specific. For ALDA, three reasons custom won the day.
Context-specific prompting. General-purpose AI tools don’t know what “a curriculum for a first-generation community-college student in applied logistics” really means. Domain-aware prompts, fine-tunes and retrieval layers do.
Evaluation you can defend. Off-the-shelf tools give you no evaluation harness. Custom builds let you measure the outputs against instructional-design standards before anything reaches a student.
Data boundaries. Education data carries FERPA obligations. Off-the-shelf products often send prompts to third-party training pipelines; custom builds let the institution keep PII inside its own perimeter.
Reach for a custom AI build when: outputs have real stakes, the domain has regulatory constraints, or you need to defend the evaluation methodology to stakeholders.
The buyer’s playbook we apply to AI engagements
Distilled from the ALDA engagement and adjacent AI projects we have shipped.
| Phase | Duration | What we deliver |
|---|---|---|
| 1. Scope & model selection | 1–2 weeks | Business goal, user workflow, model shortlist, eval definitions |
| 2. Prompt / eval harness | 2–3 weeks | Versioned prompts, eval suite, regression tests, human-in-the-loop checkpoints |
| 3. Release 1 build | 8–12 weeks | Working app, end-to-end data flow, observability, deploy pipeline |
| 4. Customer beta | 2–4 weeks | Feedback loop, prompt/UX iteration, production readiness |
| 5. Release 2 & beyond | Parallel tracks | Feature growth, model upgrades, monitoring, ongoing QA on release one |
Ready to scope a similar AI-powered app?
30 minutes to walk through your use case, evaluation goals and release one. No slides, just decisions.
A decision framework — is a custom AI build right for you?
Q1. How specific is your domain? If a general-purpose AI tool already does 80% of what you need, a custom build is hard to justify. If the last 20% is where the value lives — regulated outputs, domain knowledge, stakeholder trust — custom wins.
Q2. Can you defend the evaluation methodology? If stakeholders (customers, regulators, boards) will ask “how do you know the AI is right?”, you need an eval harness, which means a custom build.
Q3. Is data sensitive enough that you cannot send it to a third party? If yes — HIPAA, FERPA, PCI, export-controlled data — off-the-shelf wrappers are out.
Q4. How fast do you need the first release? Below 8 weeks, assemble off-the-shelf. 3–5 months is the custom-build sweet spot. More than 9 months usually means the scope is too large for an MVP — split it.
Q5. Are you willing to evaluate vendors by their AI discipline, not just their resume? Most shops can ship a web app. Few ship AI products responsibly. Ask every shortlisted partner how they version prompts, run evals and handle model drift.
Five pitfalls in custom AI engagements
1. Treating prompts as throwaway. Prompts need versioning, tests and regression suites like any other code. Shops that do not deliver this will not survive their first model upgrade.
2. No human-in-the-loop gates. AI products in regulated domains need explicit human review on sensitive outputs. Retrofitting this is painful; design it in from day one.
3. Picking the model before the evals. Different models suit different domains. Ship an eval harness first, run candidate models through it, then commit to one.
4. Ignoring model drift. Foundation models change. Fora Soft saw ChatGPT behavior shift twice during ALDA release one. Build monitoring that alerts on eval regressions (a minimal monitoring sketch follows this list).
5. Over-specifying the UI. “I need a purple button here” is Michael’s prototypical wrong brief. Specify outcomes, not pixels; let your team bring product judgment.
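On pitfall 4, the monitoring can be as simple as re-running the eval suite on a schedule and alerting when the pass rate drops below a stored baseline. A minimal sketch, assuming a team already has an eval harness and an alerting channel; run_eval_suite and alert are hypothetical placeholders for those:

```python
# Sketch of a drift monitor: re-run the eval suite on a schedule and alert on regression.
# run_eval_suite() and alert() are placeholders for a team's own harness and alerting channel.
from typing import Callable

BASELINE_PASS_RATE = 0.95   # recorded when the current model version was approved
TOLERANCE = 0.02            # tolerate small run-to-run noise before paging anyone


def check_for_drift(
    run_eval_suite: Callable[[], float],   # returns the pass rate (0.0-1.0) against the live model
    alert: Callable[[str], None],          # posts to Slack, PagerDuty, email, etc.
) -> float:
    pass_rate = run_eval_suite()
    if pass_rate < BASELINE_PASS_RATE - TOLERANCE:
        alert(
            f"Eval pass rate dropped to {pass_rate:.1%} "
            f"(baseline {BASELINE_PASS_RATE:.1%}); possible model drift."
        )
    return pass_rate
```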
FAQ
What kind of AI products does Fora Soft build?
LLM-powered applications in education, healthcare, media and real-time video, plus RAG-style domain assistants, multimodal AI agents for live workflows, computer-vision products, and AI features embedded into custom SaaS. Recent examples include Scholarly and AI textbook creation tooling.
How long does a custom AI MVP take?
Usually 12–16 weeks from kick-off to customer beta for a focused MVP, plus 2–4 weeks of beta iteration before production. ALDA release one fit that envelope. We ship larger scopes in parallel-release mode so the business starts seeing value before release two exists.
What does “agile done right” mean in an AI project?
Short iterations, right-sized documentation, demos every two weeks, a shared board with budget/timeline visible to the client, and the willingness to re-plan when the foundation model changes. Michael’s words: “the right level of documentation, the importance of communication, and the right cadence of meetings.”
How do you handle model changes during a project?
Every prompt is versioned and regression-tested. When a provider’s model changes, the eval suite catches drift before it reaches production. If the new model behavior helps (as it did twice on ALDA), we absorb it; if it hurts, we pin the version or switch models.
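In practice, the pin-or-switch decision can live in configuration rather than code, so responding to drift is a one-line change. A hedged sketch; the model IDs below are examples of dated provider snapshots, not recommendations, and model_for_production is a hypothetical helper:

```python
# Sketch: the model choice lives in config, so pinning or switching is a config change, not a code change.
# The model IDs below are examples of dated provider snapshots, not recommendations.
MODEL_CONFIG = {
    "provider": "openai",
    "model": "gpt-4o-2024-08-06",   # pinned snapshot; stays fixed until evals approve an upgrade
    "candidate": "gpt-4o-mini",     # next model under evaluation against the same eval suite
}


def model_for_production(candidate_passed_evals: bool) -> str:
    """Only move off the pinned snapshot once the candidate has passed the eval suite."""
    return MODEL_CONFIG["candidate"] if candidate_passed_evals else MODEL_CONFIG["model"]
```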
Is Fora Soft a good fit for MVP-stage startups?
Yes. Michael’s comment applies: we strike a balance between good project-management practices and not being too heavyweight or process-bound. We staff the team to the stage of the company, not the other way around.
How does Fora Soft handle FERPA / HIPAA / other data-sensitive AI builds?
By default we keep sensitive data inside the client’s perimeter, avoid sending PII to third-party training pipelines, and use enterprise API tiers with opt-out on data retention. For regulated clients we run the evaluation harness against de-identified test data and complete full audits before go-live.
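As one concrete illustration of keeping PII inside the perimeter, requests can pass through a redaction step before any text leaves the institution’s systems. The sketch below is deliberately simplified (two regexes for emails and an assumed student-ID format); real FERPA-grade pipelines use vetted de-identification tooling and are audited:

```python
# Simplified illustration of scrubbing obvious PII before a prompt leaves the perimeter.
# Real deployments rely on vetted de-identification tooling and audits, not two regexes.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
STUDENT_ID_RE = re.compile(r"\b\d{7,9}\b")   # assumed ID format; adjust per institution


def redact(text: str) -> str:
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = STUDENT_ID_RE.sub("[STUDENT_ID]", text)
    return text


prompt = "Summarize feedback from jane.doe@college.edu (student 12345678) on week 3."
print(redact(prompt))
# -> "Summarize feedback from [EMAIL] (student [STUDENT_ID]) on week 3."
```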
What should I prepare before the first conversation?
Three things: the business outcome the AI should produce, the user workflow it sits inside, and the constraints (budget, regulatory, schedule). You do not need a tech spec. We will walk through the rest with you on the call.
Where can I watch the full Michael interview?
On our YouTube channel — the full conversation is here. The video runs under 15 minutes.
What to Read Next
AI + Education
Leveraging AI for Modern Textbook Creation
A companion piece on AI-assisted content creation for education.
Case study
Scholarly: AI-Powered Learning Platform
Another AI-for-education product we shipped, end to end.
Client review
Jan, AppyBee Founder, on Custom Software Development
Another founder’s honest account of working with Fora Soft on a custom app.
Client review
Jesse, Vodeo CEO, on Custom App Development Services
A third founder’s experience — streaming-app edition.
Services
Custom Software Development
The services page behind what Michael hired us for.
Ready to ship a custom AI product on schedule?
Michael picked Fora Soft over a larger US firm and individual contractors because the shape of our engagement fits AI product work: real agile, the right documentation weight, the right meeting cadence, prompt discipline and an honest team. One release delivered and a second in flight, his ratings are 5/5 across professionalism, engagement and communication, and his answer to “would you recommend us?” is “yes, and yes.”
If you are scoping a custom AI product — education, healthcare, media, or anywhere else AI outputs carry real stakes — we will walk through the shape of the engagement, model choice, eval strategy and first-release plan in a 30-minute call. No slides, just decisions.
Let’s scope your AI product
30 minutes with Vadim — your business goal, the right model choice, and an honest first-release plan.

