InfiniteCode

Endless coding practice powered by curated challenges and novel AI-generated problems.

Feb 8, 2026 — Present

Tech Stack

Development Blog Posts

Infinite

February 8, 2026 - 6:16PM

I'm starting a new project called InfiniteCode, a coding practice platform built around maximizing problem-solving skills.

Practicing questions that appear in technical interviews is valuable. Platforms like LeetCode are great; I use them to learn common patterns, strengthen data structures and algorithms, and get comfortable with interview-style problems. But interviews don't always give you something you've seen before. They test how you think, which means they might not hand you a problem you've already solved or memorized.

InfiniteCode is built around the idea of adaptability. Alongside classic challenges, the platform will generate new problems using AI so you're constantly solving things you haven't seen before. The goal is to build this adaptability and confidence when facing unfamiliar questions, not just recognition of known ones.

I expect this project to take longer than my previous ones, since it's more ambitious. I'll be documenting the architecture, decisions, and progress as it comes together.

— Montasir

Architecture

February 10, 2026 - 11:53PM

The mission is simple: prepare users for the unexpected in coding interviews. To do that, InfiniteCode needs a clean editor, clear problems, and fast feedback. I'm keeping the architecture deliberately minimal for now so I can move efficiently and iterate without over-engineering.

The core UI will use a split layout like LeetCode, NeetCode, etc.: question on the left, code editor and output on the right, with a draggable splitter to resize the panels. It's a familiar layout that reduces the learning curve for users. I'm choosing Monaco for the editor because it feels like Visual Studio Code and supports syntax highlighting and language tooling out of the box. It's heavier than a basic textarea and less customizable, but worth it for the user experience. Besides, I don't think most users want a highly customized editor on a coding practice platform, so the tradeoff is reasonable. Check out Monaco here:

https://www.npmjs.com/package/@monaco-editor/react

I'll use the OpenRouter SDK, though I still haven't decided on an AI model. The AI pipeline has to handle two jobs: problem generation and solution grading. A Python script retrieves classic problems from public sources, but there's no good way to grade them without AI: you'd need a large suite of test cases for each problem to grade it reliably, and even then edge cases could slip through. I may eventually find a way to test and grade code deterministically, but I haven't yet. Grading should also stay lightweight, giving reasonable feedback without overwhelming users. A simple prompt that checks whether the user's solution is correct, and maybe offers a hint if it isn't, should suffice.
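As a sketch of what that lightweight grading could look like, here's a hypothetical prompt builder plus a defensive parser for the model's reply. All the names and shapes here are my own illustration, not InfiniteCode's actual code:

```typescript
// Hypothetical shapes for the grading request and the model's reply.
interface GradingRequest {
  problemStatement: string;
  userCode: string;
  language: string;
}

interface GradingResult {
  correct: boolean;
  hint?: string;
}

// Build a compact prompt that asks for a strict JSON verdict,
// keeping the feedback lightweight as described above.
function buildGradingPrompt(req: GradingRequest): string {
  return [
    "You are grading a coding-practice submission.",
    `Problem:\n${req.problemStatement}`,
    `Submission (${req.language}):\n${req.userCode}`,
    'Reply with JSON only: {"correct": boolean, "hint": string}.',
    "Keep the hint to one or two sentences; omit it if the solution is correct.",
  ].join("\n\n");
}

// Defensively parse the model's reply so a malformed response
// never reaches the user as a verdict.
function parseGradingResult(raw: string): GradingResult | null {
  try {
    const data = JSON.parse(raw);
    if (typeof data.correct !== "boolean") return null;
    const hint = typeof data.hint === "string" ? data.hint : undefined;
    return { correct: data.correct, hint };
  } catch {
    return null;
  }
}
```

The parser returning `null` on anything unexpected means the backend can fall back to a retry or a "could not grade" message instead of showing the user garbage.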

Planning-wise, I'm prioritizing a stable experience before adding complexity like user accounts or rankings. Yes, social features will have to wait; deferring them helps me nail the core loop first: read -> solve -> test -> learn. That loop is the most important part to get right, and I can always build around it later. Overview:

  • Stack: Next.js + React, TypeScript, Monaco editor, OpenRouter, Python for classic-problem retrieval
  • Future stack: PostgreSQL with Supabase for user data and storage; I haven't used Supabase before and it seems worth exploring on this project
  • UI: shadcn; MVP is a split view (question on left, editor + console on right)
  • Considerations: Monaco trades some customizability for a better user experience; AI used to grade solutions
  • Focus: Core loop first, extras (accounts, leaderboards) later
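The core loop above can be sketched as a tiny state machine. The state names mirror read -> solve -> test -> learn and are my own illustration, not the app's actual code:

```typescript
// The four stages of the core loop.
type LoopState = "read" | "solve" | "test" | "learn";

// After "learn" the user moves on to the next problem, back to "read".
const NEXT: Record<LoopState, LoopState> = {
  read: "solve",
  solve: "test",
  test: "learn",
  learn: "read",
};

function advance(state: LoopState): LoopState {
  return NEXT[state];
}
```

Keeping the loop this explicit makes it easy to hang UI state and analytics off each stage later without touching the flow itself.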

— Montasir

Assessment

February 12, 2026 - 11:01PM

While working on the grading pipeline for InfiniteCode, I've realized how powerful AI grading can be. I initially thought I could get away with just running the user's code against a set of test cases, but that approach has major limitations.

The sheer size of the problem database and its edge cases makes it impractical to maintain exhaustive test cases for every problem, especially as the database grows. Even if I did, edge cases would still slip through, and users would be left wondering why their solution was marked wrong. Layering AI grading on top of deterministic test execution adds flexibility. For example, I made it so the AI evaluates the solution and, based on the results from actual test runs, surfaces cases where the solution fails or succeeds. One to three cases, just to give users a better sense of where their solution stands and how it can be improved.
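Here's a sketch of how those one-to-three example cases might be selected from the deterministic runs. The `CaseRun` shape and the selection policy are my assumptions, not the actual implementation:

```typescript
// Hypothetical shape of one deterministic test run.
interface CaseRun {
  input: string;
  expected: string;
  actual: string;
  passed: boolean;
}

// Pick at most `limit` cases to show alongside the AI feedback:
// failures first, since those explain why a solution was marked
// wrong, then passing cases for contrast if there's room.
function pickExampleCases(runs: CaseRun[], limit = 3): CaseRun[] {
  const failures = runs.filter((r) => !r.passed);
  const passes = runs.filter((r) => r.passed);
  return [...failures, ...passes].slice(0, limit);
}
```

Feeding the model only these few concrete runs, rather than the whole suite, also keeps the grading prompt small.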

Using AI also allows for more refined feedback. Instead of just marking a solution right or wrong, the AI can provide hints, suggest optimizations, or even point out specific lines of code that could be improved. Some platforms are beginning to explore AI-assisted feedback, but I think pairing that feedback with concrete example cases could really enhance the learning experience.

Of course, there are drawbacks to using AI for grading. AI models can hallucinate; that isn't news. Relying solely on AI could produce inaccurate feedback if the model isn't properly constrained or encounters a problem it doesn't understand well. To mitigate this, I'll implement strong filtering and constraints in the backend: structured prompts, validation against real test results, and output filtering, to keep the AI's responses as accurate and helpful as possible.
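One of those constraints, validation against real test results, could look something like this sketch (names are hypothetical): the AI's verdict is never allowed to contradict the deterministic runs.

```typescript
// Hypothetical shape of the model's graded verdict.
interface AiVerdict {
  correct: boolean;
  feedback: string;
}

// If any real test failed, the submission cannot be marked correct,
// whatever the model says. The AI's prose feedback is still kept,
// since hints can be useful even when the verdict is overridden.
function reconcileVerdict(ai: AiVerdict, allTestsPassed: boolean): AiVerdict {
  if (!allTestsPassed && ai.correct) {
    return {
      correct: false,
      feedback: "Automated tests failed. " + ai.feedback,
    };
  }
  return ai;
}
```

This keeps the deterministic runs as the source of truth for pass/fail, with the AI confined to explaining and hinting.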

All in all, while using AI for grading introduces some complexity and potential downsides, I believe the benefits in terms of flexibility, refined feedback, and scalability outweigh the drawbacks when implemented thoughtfully.

Image 1

Green checks give me dopamine..

— Montasir

Versatility

February 22, 2026 - 1:01AM

It's been a while. I've been busy with University and career development, but I've been making progress on InfiniteCode in my free time.

Looking back, it seems I left off talking about the grading side of the app, having chosen AI grading after weighing the tradeoffs. I keep realizing how powerful AI can be on this platform. It has let me add more versatility, which has opened up use cases beyond just practicing interview questions.

For example, I've been working on a feature where users can have their coding assignments graded. This is a common use case for students who want feedback on their code before submitting it officially. A student pastes their code into the built-in editor, along with a description of what the code is supposed to do, and the AI handles it from there. It's especially useful if the student suspects they've missed some test cases, or just wants feedback on how to optimize their code. The AI should tell you where your code is failing and give hints on how to fix it, which is really useful for learning and improving.
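A minimal sketch of what that assignment-grading request might look like; the shapes and prompt wording here are my own illustration, since the post doesn't show the actual code:

```typescript
// Hypothetical request: the student's code plus a plain-language
// description of what it is supposed to do, which becomes the
// "spec" the AI grades against.
interface AssignmentGradingRequest {
  code: string;
  description: string;
}

function buildAssignmentPrompt(req: AssignmentGradingRequest): string {
  return [
    "You are reviewing a student's assignment before submission.",
    `The code is supposed to: ${req.description}`,
    `Code:\n${req.code}`,
    "Point out where it likely fails, suggest missed test cases, and give optimization hints.",
  ].join("\n\n");
}
```

Unlike interview-problem grading, there are no known test cases here, so the description is the only ground truth the model gets.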

Image 1

Finished up the AI question generation feature as well. Solving those questions is a fun experience, and it's interesting to see the variety of questions it can come up with.

— Montasir

Montasir Moyen - Full-Stack Software Developer & Engineer in Boston