AI Development · 5 min read

How I Built a Full AI-Powered App in a Weekend (And What I Learned)

Youness Haji

March 15, 2025

I had a Saturday morning with no client work lined up. By Sunday night, I had a production AI app live on Vercel with real users. Here's exactly how I did it — and the 3 mistakes that almost derailed everything.

The Challenge: Idea to Ship in 48 Hours

The constraint was simple: idea → deployed product in one weekend. No half-finished prototypes. No "I'll polish it next week" excuses. Something real, live, and linkable.

The idea was a Canada citizenship test study companion — an app that uses AI to quiz immigrants preparing for their citizenship test, adapting to their knowledge gaps in real time.

Why this idea? It solved a real problem (I'd seen family members struggle with the test), the scope was manageable, and AI made it dramatically better than a static quiz app.

Tech Choices (and Why)

The stack I chose wasn't accidental:

Next.js 14 with App Router — SSG for static pages, Server Actions for form submissions, and API routes for AI calls. One repo handles everything.

Groq API — I'd been meaning to try Groq for a while. At the time, GPT-4o was adding 2-3 seconds of latency per question. Groq's inference speed made the quiz feel instant. More on this below.

Expo + React Native — Mobile-first was non-negotiable. People study for the citizenship test on their phones, not laptops.

Firebase — Authentication and real-time progress tracking. I knew the SDK well enough to move fast.

Vercel — Zero-config deployment. Critical for a 48-hour timeline.

Architecture in 20 Minutes

Here's the core architecture I sketched before writing a line of code:

User opens app → Quiz session created (Firebase) → Question fetched from question bank → User answers → AI evaluates answer + generates explanation (Groq) → Firebase updates user's knowledge profile → Next question weighted by weakness areas

The AI's job was narrow but critical: evaluate free-text answers with nuance (not just exact matches) and generate concise, encouraging explanations.
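One way to implement the "weighted by weakness areas" step is weighted random sampling over per-topic error rates. This is my own sketch of the idea, not the app's actual code — the type and function names are placeholders:

```typescript
// Hypothetical sketch: pick the next quiz topic, biased toward
// topics where the user has a higher error rate.
interface TopicStats {
  topic: string;
  asked: number;
  wrong: number;
}

function pickNextTopic(
  stats: TopicStats[],
  rand: () => number = Math.random
): string {
  // Weight = smoothed error rate (Laplace smoothing), so topics the
  // user has never seen still get a nonzero chance of being picked.
  const weights = stats.map((s) => (s.wrong + 1) / (s.asked + 2));
  const total = weights.reduce((a, b) => a + b, 0);
  let r = rand() * total;
  for (let i = 0; i < stats.length; i++) {
    r -= weights[i];
    if (r <= 0) return stats[i].topic;
  }
  return stats[stats.length - 1].topic;
}
```

Injecting the random source makes the selection deterministic in tests, which matters when you're iterating fast on a weekend deadline.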

The Groq API call looked like this:

```typescript
const completion = await groq.chat.completions.create({
  model: 'llama3-70b-8192',
  messages: [
    {
      role: 'system',
      content: `You are a Canadian citizenship test examiner. 
      Evaluate the user's answer, give a score (0-100), 
      and provide a brief encouraging explanation. 
      Return JSON: { score: number, explanation: string, correct: boolean }`,
    },
    {
      role: 'user',
      content: `Question: ${question}\nUser answer: ${userAnswer}`,
    },
  ],
  response_format: { type: 'json_object' },
});
```

Structured JSON output from the AI was a game-changer. No parsing, no guessing — just completion.choices[0].message.content parsed as JSON.
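Even with `response_format` enforcing valid JSON, it's worth guarding the shape before trusting it — the model can still return unexpected fields. A minimal validator sketch (the interface and function names are mine, not the app's actual code):

```typescript
interface Evaluation {
  score: number;
  explanation: string;
  correct: boolean;
}

// Parse the model's JSON and verify it matches the schema the
// system prompt asked for; throw early if the shape is off.
function parseEvaluation(raw: string): Evaluation {
  const data = JSON.parse(raw);
  if (
    typeof data.score !== 'number' ||
    typeof data.explanation !== 'string' ||
    typeof data.correct !== 'boolean'
  ) {
    throw new Error('Unexpected evaluation shape from model');
  }
  return data as Evaluation;
}
```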

The 3 Mistakes That Cost Me 6 Hours

Mistake 1: Not Mocking the AI Early

I spent Saturday morning building the UI against the real Groq API. Rate limits hit fast. I burned 2 hours debugging issues that were actually just throttling.

Fix: Implement a mock AI response function first, test the UI completely, then swap in the real API. Keep the mock in __mocks__/ai.ts for fast iteration.
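A mock along those lines might look like this — the shape mirrors the JSON schema in the system prompt, but the function name and canned responses are placeholders, not the project's real code:

```typescript
// __mocks__/ai.ts — deterministic stand-in for the Groq call,
// so the UI can be built and tested without burning rate limits.
export interface Evaluation {
  score: number;
  explanation: string;
  correct: boolean;
}

export async function evaluateAnswer(
  question: string,
  userAnswer: string
): Promise<Evaluation> {
  // Crude heuristic: treat any non-trivial answer as "correct".
  const correct = userAnswer.trim().length > 3;
  return {
    score: correct ? 85 : 20,
    explanation: correct
      ? 'Nice — that covers the key point.'
      : 'Not quite. Review this chapter and try again.',
    correct,
  };
}
```

Because the mock is async and returns the same shape, swapping in the real API later is a one-line import change.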

Mistake 2: Firebase Security Rules as an Afterthought

Sunday morning I realized my Firestore was wide open. Spent 90 minutes writing proper security rules before I could share the app publicly.

Fix: Write your security rules in the first hour. They're not optional and they're faster to write when you're thinking about the data model anyway.
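For an app with this data model, the rules can start as simple as "users touch only their own documents, the question bank is read-only." This is a minimal sketch under assumed collection names, not the app's actual rules file:

```
// firestore.rules — minimal sketch (collection names assumed)
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Users may only read and write their own quiz progress.
    match /users/{userId}/{document=**} {
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }
    // The question bank is read-only for signed-in users.
    match /questions/{questionId} {
      allow read: if request.auth != null;
      allow write: if false;
    }
  }
}
```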

Mistake 3: Not Budgeting for Vercel Environment Variables

The app worked locally. It crashed on deploy. The reason? I had 6 environment variables I'd forgotten to add to Vercel.

Fix: Create a DEPLOYMENT_CHECKLIST.md with a section: "Environment variables to set on Vercel before deploying." Takes 5 minutes, saves an hour.
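A checklist catches this at deploy time; a startup guard catches it at runtime too. Here's a small sketch that fails fast when a required variable is missing — the variable names are placeholders for whatever your app actually needs:

```typescript
// Fail fast on boot if a required env var is missing, instead of
// crashing later with a confusing downstream error.
const REQUIRED_ENV = ['GROQ_API_KEY', 'FIREBASE_PROJECT_ID'] as const;

function assertEnv(env: Record<string, string | undefined>): void {
  const missing = REQUIRED_ENV.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(', ')}`);
  }
}
```

Call `assertEnv(process.env)` once at startup; a failed deploy then tells you exactly which variables to add in the Vercel dashboard.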

Deployment Gotchas

Beyond the env vars, Vercel deployment surfaced two more issues:

Dynamic imports: I was importing a PDF parser that used Node.js fs module. This broke in the Edge runtime. Solution: move the route to runtime = 'nodejs' explicitly.
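In the App Router, that opt-out is a one-line segment config export at the top of the route file (the file path here is illustrative):

```typescript
// app/api/parse-pdf/route.ts (path is illustrative)
// Force this route onto the Node.js runtime so Node built-ins
// like `fs` are available; the Edge runtime doesn't provide them.
export const runtime = 'nodejs';
```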

Image optimization: Next.js <Image> component requires explicit width and height or fill prop. I had 12 images missing these props. The build caught them all at once.

The Final Result

By Sunday at 10 PM, the app was live. By Monday morning, I shared it in two Facebook groups for newcomers to Canada. Within a week, 340 people had used it.

Was it perfect? No. The UI needed polish. The question bank had gaps. The AI occasionally gave explanations that were too long for mobile.

But it was real. It was live. It was helping people. That matters more than perfection.

What I'd Change

  1. Use Expo Router from day 1 — I used React Navigation and regretted it. Expo Router's file-based routing is faster to build with.
  2. Add analytics immediately — I didn't know which features people were actually using until day 3.
  3. Build the share flow earlier — The biggest growth driver was users sharing their quiz results. I almost didn't build this.

The Real Lesson

The bottleneck on weekend projects isn't time. It's scope discipline. I cut 4 features from my original plan:

  • PDF upload for study materials
  • Leaderboard system
  • Push notifications
  • Rich social sharing of scores (only a bare-bones share link made the cut)

Cutting these made the difference between "shipped" and "almost shipped."

Pick the one thing that makes your app worth opening. Build that perfectly. Ship everything else later.


Want to see the final app? Check out the Canada Citizenship Test project for the full case study. And if you're working on your own AI-powered project and want a second opinion, let's talk.