
InfiniteCode

An application providing endless coding practice, powered by curated challenges and novel AI-generated problems.


February 2026


Development Blog Posts

Infinite

February 8, 2026 - 6:16PM

I'm starting a new project called InfiniteCode, a coding practice platform built around strengthening problem-solving skills.

Practicing questions that appear in technical interviews is valuable. Platforms like LeetCode are great; I use them to learn common patterns, strengthen data structures and algorithms, and get comfortable with interview-style problems. But interviews don't always give you something you've seen before. They test how you think, which means you can't count on being handed a problem you've already solved or memorized.

InfiniteCode is built around the idea of adaptability. Alongside classic challenges, the platform will generate new problems using AI so you're constantly solving things you haven't seen before. The goal is to build adaptability and confidence when facing unfamiliar questions, not just recognition of known ones.

I expect this project to take longer than my previous ones, since it's more ambitious. I'll be documenting the architecture, decisions, and progress as it comes together.

— Montasir

Architecture

February 10, 2026 - 11:53PM

The mission is simple: prepare users for the unexpected in coding interviews. To deliver that, InfiniteCode needs a clean editor, clear problems, and fast feedback. I'm keeping the architecture minimal on purpose so I can move efficiently and iterate without over-engineering.

The core UI uses a split layout like LeetCode and NeetCode: question on the left, code editor and output on the right, with a draggable splitter to resize the panels. It's a familiar layout that reduces the learning curve for users. I'm choosing Monaco for the editor because it feels like Visual Studio Code and supports syntax highlighting and language tooling out of the box. It's heavier than a basic textarea and less customizable, but worth it for the user experience. Most users won't want a heavily customized editor in a practice platform anyway, so the tradeoff is reasonable. Check out Monaco here:

https://www.npmjs.com/package/@monaco-editor/react
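The splitter's resize behavior boils down to one constraint: neither panel should collapse while dragging. A minimal sketch, independent of any particular splitter library (`clampSplit` is a hypothetical helper, not the app's actual code):

```typescript
// Keep both panels usable while the user drags the splitter.
// ratio = the question panel's share of the total width, in [0, 1].
function clampSplit(ratio: number, min = 0.2, max = 0.8): number {
  return Math.min(max, Math.max(min, ratio));
}
```

Whatever splitter component ends up in the app, clamping the ratio like this avoids the "panel dragged to zero width" failure mode.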

I'll use the OpenRouter SDK, though I haven't decided on a model yet. The AI pipeline has to handle both problem generation and solution grading. A Python script retrieves classic problems from public sources, but grading them reliably without AI is hard: you'd need a large test suite for every problem, and even then edge cases could slip through. I may not strictly need AI to grade solutions, but I haven't found a dependable way to test and grade code without it. Grading should stay lightweight, giving reasonable feedback without overwhelming users. A simple prompt should suffice to check whether the user's solution is correct, and to provide hints when it isn't.
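The "simple prompt" idea can be sketched as a prompt builder plus a defensive parser for the model's reply. This is a minimal sketch under my own assumptions; `buildGradingPrompt`, `parseVerdict`, and `GradeVerdict` are hypothetical names, not the app's actual API:

```typescript
interface GradeVerdict {
  correct: boolean;
  hints: string[]; // shown only when the solution is wrong
}

// Build a structured grading prompt that pins the model to a JSON reply.
function buildGradingPrompt(problem: string, code: string): string {
  return [
    "You are grading a coding-interview solution.",
    `Problem:\n${problem}`,
    `Candidate solution:\n${code}`,
    'Reply with JSON only: {"correct": boolean, "hints": string[]}.',
  ].join("\n\n");
}

// Never trust model output blindly: reject anything that isn't the
// expected shape instead of letting malformed replies reach the UI.
function parseVerdict(raw: string): GradeVerdict | null {
  try {
    const v = JSON.parse(raw);
    if (typeof v.correct !== "boolean" || !Array.isArray(v.hints)) return null;
    return { correct: v.correct, hints: v.hints.map(String) };
  } catch {
    return null;
  }
}
```

The actual chat-completion call through OpenRouter would sit between these two functions; the point here is that the prompt and the parse are the only places the model's free-form text touches the system.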

Planning-wise, I'm prioritizing a stable experience before adding complexity like user accounts or rankings. Social features will have to wait; that lets me nail the core loop first: read -> solve -> test -> learn. It's the most important part to get right, and I can always build around it later. Overview:

  • Stack: Next.js + React, TypeScript, Monaco Editor, OpenRouter, Python for classic problem retrieval
  • Future stack: PostgreSQL and Supabase for user data and storage; I want to explore Supabase since I haven't used it before and it looks interesting
  • UI: shadcn; MVP is a split view (question on left, editor + console on right)
  • Considerations: Monaco is less customizable but gives a better user experience; AI used to grade solutions
  • Focus: core loop first, extras (accounts, leaderboards) later

— Montasir

Assessment

February 12, 2026 - 11:01PM

While working on the grading pipeline for InfiniteCode, I've realized how powerful AI grading can be. I initially thought I could get away with just running the user's code against a set of test cases, but that approach has major limitations.

The size of the problem database makes it impractical to maintain exhaustive test cases for every problem, especially as it grows. Even if I did, edge cases would slip through, and users would be left wondering why their solution was marked wrong. Layering AI grading on top of deterministic test execution adds flexibility. For example, I made it so the AI evaluates the solution and also surfaces a few cases where it fails or succeeds, based on the results of actual test runs. One to three cases, just to give users a better understanding of where their solution stands and how it can be improved.
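Selecting which one-to-three cases to show can be done deterministically before the AI ever sees them. A sketch under my own assumptions (`TestResult` and `pickExampleCases` are hypothetical names):

```typescript
interface TestResult {
  input: string;
  expected: string;
  actual: string;
  passed: boolean;
}

// Prefer failing cases (most instructive for the user), pad with passing
// ones if there aren't enough failures, and cap the selection at three.
function pickExampleCases(results: TestResult[], max = 3): TestResult[] {
  const failed = results.filter(r => !r.passed);
  const passed = results.filter(r => r.passed);
  return [...failed, ...passed].slice(0, Math.max(1, Math.min(max, 3)));
}
```

Because the cases come from real test runs, the AI's commentary is grounded in actual outputs rather than anything it might invent.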

Using AI also allows for more refined feedback. Instead of just marking a solution as right or wrong, the AI can provide hints, suggest optimizations, or even point out specific lines of code that could be improved. Some platforms are beginning to explore AI-assisted feedback, but I think concrete example cases alongside that feedback could really enhance the learning experience for users.

Of course, there are drawbacks to using AI for grading. AI models can hallucinate, and relying on them alone could produce inaccurate feedback if the model isn't properly constrained or encounters a problem it doesn't understand well. To mitigate this, the backend needs strong filtering and constraints: structured prompts, validation against real test results, and output filtering to keep the AI's responses accurate and helpful.
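Two of those constraint layers are easy to sketch: the AI's verdict is never allowed to contradict deterministic test results, and its hints are capped before they reach the UI. These are minimal sketches under my own assumptions (`reconcileVerdict` and `filterHints` are hypothetical names):

```typescript
// Real test failures always win over an optimistic model; if tests pass,
// the AI may still flag issues (e.g. hard-coded outputs).
function reconcileVerdict(aiSaysCorrect: boolean, testsAllPassed: boolean): boolean {
  if (!testsAllPassed) return false;
  return aiSaysCorrect;
}

// Cap and clean hint text so a rambling model response never floods the UI.
function filterHints(hints: string[], maxHints = 3, maxLen = 200): string[] {
  return hints
    .map(h => h.trim())
    .filter(h => h.length > 0)
    .slice(0, maxHints)
    .map(h => (h.length > maxLen ? h.slice(0, maxLen) + "…" : h));
}
```

Neither layer makes the model smarter; they just bound how wrong its output can be by the time a user sees it.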

All in all, while using AI for grading introduces some complexity and potential downsides, I believe the benefits in terms of flexibility, refined feedback, and scalability outweigh the drawbacks when implemented thoughtfully.


Green checks give me dopamine..

— Montasir

Versatility

February 22, 2026 - 1:01AM

It's been a while. I've been busy with University and career development, but I've been making progress on InfiniteCode in my free time.

Looking back, I left off talking about the grading aspect of the app, having chosen AI grading after weighing the tradeoffs. I'm still realizing how much AI can do for this platform. It has let me add versatility that opens up use cases beyond just practicing interview questions.

For example, I've added a feature where users can have their coding assignments graded. This is a common need for students who want feedback on their code before submitting it officially. A student inputs their code into the built-in editor, along with a description of what the code is supposed to do, and the AI handles it from there. It's especially useful if the student suspects they've missed some test cases, or just wants feedback on how to optimize their code; the AI points out where the code is failing and offers hints on how to fix it, which is genuinely useful for learning and improving.


Finished up the AI question generation feature as well. Solving those questions is a fun experience, and it's interesting to see the variety of questions it can come up with.

— Montasir

Persistence I

March 8, 2026 - 1:19AM

I've been looking forward to this part of the development: moving from in-memory state to real persistence. Until now, features worked correctly in the UI, but data was lost when the page was closed. Supabase changed that: it gave me storage, auth, and a PostgreSQL backend in one place, so I could focus more on product behavior and less on wiring up infrastructure from scratch.

I had a bit of prior experience with PostgreSQL, but Supabase was new for me. The learning curve was honestly smooth. The dashboard made it easy to inspect tables, test queries, and verify data while building. For this project, relational structure matters. Users, generated questions, and saved question records all map naturally to Postgres tables, and that made the data model feel clean and predictable.

Authentication was the next step. Once auth was in place, features became user-specific in a meaningful way. A signed-in user can save generated questions and come back later to load them, while anonymous users still get the core experience but without persistence. That line between “try it” and “personal workspace” is small in code, but huge in UX.

One detail I cared about was data safety in the UI flow. If a user is about to load another saved question or click “New Question” while their current generated question hasn't been saved, the app now warns them first. It sounds simple, but these guardrails matter. They prevent accidental data loss and make the app feel reliable instead of fragile.

Overall, this stage made the app feel real. Persistence, authentication, and a structured database turned temporary interactions into durable user progress. There's still plenty to build, but this foundation is solid.


— Montasir

Persistence II

March 10, 2026 - 2:04AM

In the last blog post, I focused on saving AI-generated questions with Supabase and PostgreSQL. Storing the problem itself solved half the problem. But while using the app, I noticed the bigger pain point was actually user progress. People switch tabs, refresh, change language, or come back later, and if their code disappears, trust disappears too. So recently I focused on saving editor code drafts and tracking completed questions in a way that feels fast for users but stays clean and affordable on the backend.

My first decision was local draft saving. I added autosave to localStorage with a debounce, keyed by user + question + language + variant. That means a Python draft for Question A does not overwrite a Java draft for Question A, and neither affects Question B. Local save is instant and free, so it gives a great UX baseline. Then I layered cloud persistence with Supabase for authenticated users. I only sync remotely when it matters (manual save, run/submit/grade, periodic dirty sync), not every keystroke. This reduces write volume and avoids turning autosave into a cost problem.
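The key scheme and the debounce are the two pieces that make local autosave safe. A minimal sketch, with the storage interface injected so the logic stays testable outside a browser (`draftKey`, `debouncedSave`, and `KVStore` are hypothetical names standing in for the app's actual code):

```typescript
interface KVStore {
  setItem(key: string, value: string): void;
  getItem(key: string): string | null;
}

// One draft per user + question + language + variant, so a Python draft
// for Question A never overwrites a Java draft for the same question.
function draftKey(user: string, question: string, lang: string, variant: string): string {
  return `draft:${user}:${question}:${lang}:${variant}`;
}

// Debounced write: rapid keystrokes collapse into a single save after the
// user pauses, instead of one write per keystroke.
function debouncedSave(store: KVStore, delayMs = 750) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (key: string, code: string) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => store.setItem(key, code), delayMs);
  };
}
```

In the browser, `localStorage` satisfies the `KVStore` shape directly, so the same functions back the real autosave path.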

I had to make tradeoffs around consistency vs. cost. Full real-time sync on every edit sounds nice, but it's expensive and noisy. So I used hash-based checks to skip unchanged saves and added cooldown/rate limiting on manual draft saves. I also introduced conflict resolution between local and remote drafts using updated timestamps, choosing the newer draft as the source of truth on load. For a single-user coding workflow, this is simple and reliable. In short, I optimize for practical durability, not theoretical perfection.
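Both the skip-unchanged check and the newer-wins merge fit in a few lines. A sketch under my own assumptions; `contentHash`, `needsSync`, and `resolveDraft` are hypothetical names, and Node's `crypto` stands in here for whatever hashing path the browser client actually uses:

```typescript
import { createHash } from "crypto";

interface Draft {
  code: string;
  updatedAt: number; // epoch milliseconds
}

function contentHash(code: string): string {
  return createHash("sha256").update(code).digest("hex");
}

// Skip the remote write entirely when nothing actually changed since
// the last successful sync.
function needsSync(localCode: string, lastSyncedHash: string | null): boolean {
  return contentHash(localCode) !== lastSyncedHash;
}

// On load, the newer draft wins; ties favor local, avoiding a round-trip.
function resolveDraft(local: Draft | null, remote: Draft | null): Draft | null {
  if (!local) return remote;
  if (!remote) return local;
  return remote.updatedAt > local.updatedAt ? remote : local;
}
```

Timestamp-based resolution can lose edits under true concurrent writers, but for one user editing their own draft it's the right level of complexity.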

The second feature was completion tracking after a successful submit. I created a completed-questions table and marked questions as completed only after a valid submission event, not on run/simulate. This avoids false positives and keeps progress meaningful. For AI-generated questions, I used stable question keys (with a content-based hash fallback) so progress and drafts still map correctly even when IDs are temporary. The result is a smoother loop: solve -> submit -> completion recorded -> progress reflected in the UI. This also sets up future features like streaks and personalized recommendations.
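The stable-key idea can be sketched in one function: use the database id when one exists, otherwise derive a key from the question content itself, so the same generated question always maps to the same drafts and completions. This is a sketch under my own assumptions; `stableQuestionKey` is a hypothetical name:

```typescript
import { createHash } from "crypto";

// Persistent id if we have one; otherwise a content-derived key so that
// regenerating the same question still finds its drafts and completion.
function stableQuestionKey(id: string | null, title: string, body: string): string {
  if (id) return `q:${id}`;
  const digest = createHash("sha256")
    .update(`${title}\n${body}`)
    .digest("hex")
    .slice(0, 16); // a short prefix is enough to avoid practical collisions
  return `gen:${digest}`;
}
```

The `q:` / `gen:` prefixes keep the two key spaces from ever colliding, even if a content hash happened to look like a database id.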

All in all, I was working less on adding flashy features and more on product reliability. My core principle was “save often locally, sync smartly remotely”. That gave me a strong UX, lower backend cost, and cleaner data semantics for progress tracking.


— Montasir

Live

March 29, 2026 - 5:14PM

InfiniteCode is now live!

https://infinitecodex.xyz

After a few months of development, the core experience is ready for users. I'm happy with how the core loop turned out, and I think the AI grading and generation features add a lot of value and versatility to the platform.

This is just the beginning; there are plenty of features and improvements I want to add, but I'm excited to share it in its current state. The database of classic problems will grow, and the AI grading and generation will continue to be refined based on user feedback and my own observations.


— Montasir

End
