Pursuit entered 2025 with one guiding principle: evolve or die. We knew that to continue to generate life-changing outcomes, we’d have to rebuild from the ground up to be AI-Native. And this meant more than transforming our training—it meant transforming Pursuit itself.
So, starting on January 15, 2025, we began changing the way we operate as an organization.
We set clear expectations with staff:
We rethink priors
We all build software
We move quickly and iterate
Twelve months later, we’ve cut our SaaS spend in half, built in-house workforce development software that evolves with the job market, and collected data that helps us operate better and land more jobs for the community we serve.

A slide from our January 2025 kickoff All-Staff
We learned:
Experimentation requires ruthlessly protected time: without it, daily urgency always wins
When domain experts become builders, everything accelerates
The technology is easy, the transformation is hard
Custom software raises the bar for what we'll accept from vendors

At Pursuit, 'Everyone Builds': our IRL PRD kanban with products and features contributed by the whole team
The first thing we built in service of our AI-Native transformation was 'Build Days'. When we announced that once a week, staff from all departments would clear their calendars to build with AI, reception was mixed: some worried it would derail urgent work; others doubted AI’s value.
But over the weeks, we all began to see what was possible. What started with personal projects—beat-matching software for our DJ colleague, a bedtime story generator for the new dad—eventually became structured and strategic. We built early prototypes including a self check-in app to track Builder attendance and a volunteer management tool. We created an intake process to channel builds toward organizational priorities. We started building in teams. We all learned what a good PRD looks like.
We are dogfooding our own mission: teaching ourselves to build with AI the same way we teach our Builders. By year's end, we'd effectively added 14 contributing engineers to our team—not by hiring, but by upskilling the staff already here.
Now, 78% of non-technical staff have contributed code to our production-level apps.
We learned:
It has to be fun before it’s productive
Cultural transformation is harder than skill development
Experimentation dies without protected time & permission to fail
We’re starting to see demand to expand Build Days to partner orgs and have launched pilots with Neighborhood Trust and MetLife. Participants entered with minimal AI knowledge and left with real products they had built themselves.

We dedicated a full day each week to experiment with AI, build, and share learnings as a team
After a few months of Build Days, we were ready to test what we'd been learning. When our admissions software came up for renewal, we made our first org-wide bet: could we replace it with a 'vibe-coded' product built by non-technical staff?
We put together a team, defined requirements, and got to building. Four weeks later, we shipped a fully operational admissions portal that replaced our existing vendors and added capabilities they never offered:
An intelligent agent that evaluates incoming applications and recommends next steps (a sketch of this pattern follows the list).
An automated email system that manages the funnel and improves applicant communication.
An in-app onboarding tool that saves hours of valuable time on Day 1.
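To make that first feature concrete, here's a minimal sketch of the screening pattern, assuming an OpenAI-style chat API. The rubric, model choice, and JSON shape are illustrative, not our production code:

```python
# Hypothetical sketch of the application-screening pattern: an LLM scores
# an application against a rubric and recommends a next step. The rubric,
# model, and schema below are illustrative, not Pursuit's production code.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = """Score the applicant from 1-5 on motivation, time_commitment,
and persistence, then recommend one next step. Return JSON:
{"motivation": n, "time_commitment": n, "persistence": n,
 "next_step": "advance_to_interview" | "request_more_info" | "decline"}"""

def screen_application(application_text: str) -> dict:
    """Ask the model for rubric scores plus a recommended next step."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": application_text},
        ],
    )
    return json.loads(response.choices[0].message.content)

# A human reviewer still confirms every recommendation before it takes effect.
```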
Were there bugs and glitches at launch? Absolutely. But our bet paid off. We canceled licenses for three SaaS products, significantly sped up our recruitment time, and launched four cohorts of high-potential learners in just ten months.
More importantly, we'd proven the model: non-technical staff could build production software. This proof point energized the team, and our “Everyone Builds” culture was cemented.
We learned:
Custom software raises the bar for what we'll accept from vendors—we now evaluate SaaS based on whether we could build it ourselves.
Merge conflicts teach collaboration faster than any workshop.
Collaborative building taught us what alignment actually means—not nodding at requirements, but genuinely understanding who needs what and why.

Our new admissions software not only speeds up our operations, it now streamlines data that was previously housed in different platforms
Last year, we radically changed our offering: both what we teach and how we teach it. We scrapped our traditional software development curriculum and shifted to a new model focused on teaching our community how to build software with AI.
The insight was simple: to learn AI, you have to use AI. Our Builders have each spent ~1,000 hours building with AI throughout the program, not learning about AI through lectures but developing the muscle memory and intuition that comes from daily practice. That makes them superusers in a market where many "AI-trained" developers have spent a few weekends with ChatGPT, and it gives our community a genuine first-mover advantage.
But here's what made it possible: our team built the training platform themselves. The same people who'd spent years in our classrooms, learning exactly where learners get stuck, what tone resonates, and what sequencing works, could now encode that institutional knowledge directly into software.
The Builder platform contains:
AI Coach: Delivers content and amplifies the in-person learning experience, infused with Pursuit's training philosophy.
Curriculum Generation: Rapidly updates course materials to stay flexible with the job market.
Feedback Agents: Track progress and accelerate growth with personalized guidance.
Our team became experts in context engineering, designing the prompts, system instructions, and conversation structures that make the AI Coach feel like Pursuit. We encoded our training philosophy and pedagogical approaches into every interaction.
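In practice, context engineering can be as concrete as composing a system prompt from reusable pieces. Here's a minimal sketch of that composition idea; the field names and wording are hypothetical, not our production prompts:

```python
# Illustrative sketch of context engineering: composing a system prompt
# from reusable pieces so the AI Coach stays consistent with a training
# philosophy. Field names and wording are hypothetical examples.
from dataclasses import dataclass

@dataclass
class CoachContext:
    philosophy: str      # how the coach should teach (e.g., Socratic hints)
    tone: str            # voice and style guidelines
    lesson_summary: str  # what the learner is currently working on
    learner_notes: str   # known sticking points for this learner

def build_system_prompt(ctx: CoachContext) -> str:
    """Assemble the system prompt the coach model sees on every turn."""
    return "\n\n".join([
        f"You are an AI Coach. Teaching approach: {ctx.philosophy}",
        f"Tone: {ctx.tone}",
        f"Current lesson: {ctx.lesson_summary}",
        f"About this learner: {ctx.learner_notes}",
        "Never hand over a full solution; guide with questions and small steps.",
    ])
```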
The technical challenge came next: ensuring reliability at scale. When you're running non-deterministic agentic tools in production, traditional testing breaks down—you can't check if the agent gave the "right" response because there are infinite valid responses. Our solution: AI-powered evaluation frameworks that assess quality and adherence rather than correctness. We use AI to judge whether responses are helpful, appropriately toned, and pedagogically sound across thousands of interactions.
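This pattern is often called LLM-as-judge. Here's a minimal sketch of the idea; the criteria, judge model, and pass threshold are illustrative assumptions, not our production framework:

```python
# Minimal LLM-as-judge sketch: instead of asserting one "right answer",
# a second model grades each response against rubric criteria.
# Criteria, judge model, and pass threshold are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

JUDGE_PROMPT = """You are grading an AI tutor's reply. Score each criterion
from 1 (poor) to 5 (excellent). Return JSON:
{"helpful": n, "tone": n, "pedagogically_sound": n}"""

def judge_response(learner_message: str, coach_reply: str) -> dict:
    """Have a judge model score one coach reply against the rubric."""
    result = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative judge model
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": JUDGE_PROMPT},
            {"role": "user",
             "content": f"Learner: {learner_message}\nCoach: {coach_reply}"},
        ],
    )
    return json.loads(result.choices[0].message.content)

def passes(scores: dict, threshold: int = 4) -> bool:
    # A reply "passes" only if every criterion clears the threshold.
    return all(v >= threshold for v in scores.values())
```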
As the LLM competition accelerates, new models launch constantly and capabilities shift overnight. We maintain a multi-model approach to both building and learning, powered by OpenRouter. This gives Builders and staff open access to the latest frontier models as they launch, creating a living laboratory where we collect data on model usage and performance. It lets us rapidly experiment and adapt curriculum as the landscape shifts, and helps our Builders and staff become sophisticated users with informed preferences.
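Because OpenRouter exposes an OpenAI-compatible API, comparing models is close to a one-line change. A minimal sketch; the model IDs are examples, and the available catalog shifts as new models launch:

```python
# Sketch of multi-model access through OpenRouter's OpenAI-compatible API.
# Model IDs below are examples; the catalog changes as new models launch.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

def ask(model: str, prompt: str) -> str:
    """Send the same prompt to any model OpenRouter serves."""
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

# The same call runs against different providers, which makes it easy to
# compare outputs and log per-model usage and performance data.
for model in ["openai/gpt-4o", "anthropic/claude-3.5-sonnet"]:
    print(model, "->", ask(model, "Explain recursion in one sentence."))
```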
We learned:
When the people closest to the problem can build the solution, iteration cycles collapse.
1,000 hours of building with AI creates superusers, and early movers have a genuine competitive advantage in a market just waking up to this skill set.
Building production AI is less about getting agents to work and more about keeping them working reliably, which requires entirely new approaches to testing and evaluation.
AI doesn't replace human instruction—it amplifies it by handling content delivery so teachers focus on community building, teamwork, and the human interventions that drive outcomes.

The AI Coach powers training on the Pursuit Platform
Our coaches have started to think like product managers. They don’t just want to teach better; they want to measure what’s working and iterate on it daily. So they built a data layer themselves, designing dashboards that surface the insights they need in real time.
The result is a teaching practice driven by evidence:
Unique AI Conversation Data: We capture the learner's natural reasoning process (the full history of questions and attempts with the AI Coach). This data diagnoses the exact point of confusion and identifies micro-bottlenecks better than traditional learning data.
Real-Time Intervention Dashboard: Staff see learning activity and concept-acquisition data, identify who needs help and who's ready to help others, and act on it immediately (a sketch of this kind of flagging logic follows the list).
Continuous Curriculum Refinement: Aggregated conversation data dictates program evolution, allowing us to adapt curriculum to the pace of learning.
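As a rough illustration of the flagging logic mentioned above, here's a sketch; the event schema and threshold are hypothetical, not our actual data model:

```python
# Hypothetical sketch of intervention flagging: count unresolved attempts
# on the same concept in recent AI Coach conversations and surface learners
# who may be stuck. The event schema and threshold are illustrative.
from collections import Counter
from dataclasses import dataclass

@dataclass
class CoachEvent:
    learner_id: str
    concept: str    # e.g., "array-iteration"
    resolved: bool  # did the learner get unstuck in this exchange?

def flag_stuck_learners(events: list[CoachEvent], max_misses: int = 3) -> set[str]:
    """Return learner IDs with repeated unresolved attempts on one concept."""
    misses = Counter(
        (e.learner_id, e.concept) for e in events if not e.resolved
    )
    return {learner for (learner, _), count in misses.items() if count >= max_misses}
```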
We learned:
The best learning data captures process, not just outcomes.
The hardest part wasn't building the infrastructure; it was changing how decisions get made. Moving from intuition to evidence required deliberate practice.

The data powering and collected by our AI-native transformation
Our goal is to connect AI-native Builders with high-value jobs in a labor market that's evolving faster than job titles can keep up. "Prompt engineering," the hot skill six months ago, is now table stakes. We're collecting data from everywhere we can (job descriptions, conversations with hiring partners, intel from staff and industry leaders), attempting to capture early market signals in the noise.
We aim to use these signals to identify evergreen skills that transfer across future roles, then help Builders leapfrog traditional hiring through real projects and targeted networking.
We built three tools to test this:
Pathfinder: How our Builders transform their job search into a strategic campaign. It moves beyond applications by serving as a networking and portfolio tool, encouraging Builders to actively build for the job they want by logging projects and contacts aligned with target roles.
Lookbook: Our public-facing directory that shares the unique skills of our Builders directly with hiring partners. It showcases detailed profiles, completed AI projects, and granular skill sets.
Sputnik: The internal CRM staff use to log employer outreach, job leads, and responses, mapping the network of relationships we're building with hiring partners. It gives us a complete view of what our team is doing to place Builders and what's actually working.
We learned:
The market is shifting too quickly to train for specific roles—the best thing we can teach is how to 'learn to learn' and adapt quickly.
Employers are still figuring out what they need from AI-native talent—we're all defining this category together.
We're testing whether we can spot market shifts fast enough to shape training and placement in real-time. Our pilot cohort has just entered the job search phase of our program. 2026 will tell us whether our data compounds into genuine market intelligence or just stays noise.

The Pursuit Lookbook provides 'proof of work' and highlights the apps Builders have created for real-world clients
Last year proved we could transform how we operate. This year we're focusing on jobs and achieving product-market fit for our Builders.
We'll keep experimenting, building in public, and betting on the people closest to the problems to solve them.
A lot is still uncertain, but we know one thing: evolve or die wasn't just a 2025 mandate—it's our culture now.