RamAI

An application assisting Suffolk University students by providing tools such as an AI chatbot and a library of 1200+ professors.


Jan 15, 2026 — Jan 20, 2026

Project Page · Live Demo

Tech Stack

TypeScript · React · Python · Data Engineering · AI Integration · Auditing

Development Blog Posts

Inception

January 15, 2026

Actually, this is the start of the blog for this project. I started planning the project itself a few days ago. I suppose I picked this app in particular because I was interested in creating my first AI-integrated application, but I also wanted something that Suffolk University students could actually use.

Right now, I've already started building the app after a day or two of planning the architecture and deciding on the tech stack. I've always had the bad habit of creating an app's name and logo early, but wow, that part is really fun.

But enough of that, here's what I've done so far:

  • Prototype AI chatbot. I've decided to use XiaomiMiMo's MiMo-V2-Flash. I like this model because it's quite fast despite being inexpensive, and anything more capable would be overkill for this project.
  • Discover page, built around its main component: the search bar. You can search for professors by name, and a filters button narrows results by department, such as Computer Science, Mathematics, and Business.
  • Professor profile page, which displays the professor's information such as their name, department, average rating, and recent reviews from students.
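As a rough sketch of how the Discover search and department filters above could work (the data model and function names here are hypothetical, not RamAI's actual code):

```python
from dataclasses import dataclass

# Illustrative data model; field names are assumptions, not RamAI's real schema.
@dataclass
class Professor:
    name: str
    department: str
    avg_rating: float

def search_professors(professors, query="", departments=None):
    """Case-insensitive name search, optionally narrowed by department filters."""
    q = query.lower()
    results = [
        p for p in professors
        if q in p.name.lower()
        and (not departments or p.department in departments)
    ]
    # Sort best-rated first so the most relevant profiles surface at the top.
    return sorted(results, key=lambda p: p.avg_rating, reverse=True)
```

An empty query with a department filter behaves like browsing a department, which keeps the search bar and the filters button on one code path.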

I'll continue working on this in my free time, and I'll post updates here as I make more progress.

— Montasir

Motive

January 16, 2026

One of my friends told me this project didn't make sense to build because Google's AI search results or ChatGPT "do the same thing and better." I agreed; they had a point. But I had one too:

The existence of similar projects by billion-dollar companies shouldn't deter you from building your own. The goal, or my goal at least, isn't to compete with them. It's to learn, build, and ultimately have fun in my free time, because that's been my passion. That's the mentality I bring to planning and building all of my projects, and it's what lets me actually enjoy working on them instead of getting demotivated by the fact that similar projects already exist.

For me, this project was always about growing as a software developer and engineer: designing a system, integrating AI, and working with real data, while also building something genuinely useful for students. And honestly, if even one person uses and benefits from it, that's good enough for me. Every project teaches you something, and for me, that's far more valuable than trying to measure yourself against tech giants.

— Montasir

Audit

January 17, 2026

This border color is nice... anyways, lately I've been auditing my AI's backend instead of adding new features. I halted feature work because accuracy and trust matter a lot when it comes to RamAI's chatbot. If the AI gives messy information or can't respond to certain questions, users will quickly lose trust and stop using it.

I have an opinion that you may or may not agree with: an AI that confidently says incorrect stuff is worse than one that says "I don't know".

So, I researched and came up with questions that real users would realistically ask the chatbot. Then I asked those questions, recorded the responses, and analyzed them alongside a rating based on how well the AI responded. I wanted to see where the system actually breaks under real questions.
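That loop, ask, record, rate, can be sketched roughly like this. Note that `ask_chatbot` is a stand-in for the real RamAI backend call, and the whole harness is an illustrative assumption rather than the actual audit script:

```python
import csv
import datetime

def ask_chatbot(question: str) -> str:
    # Placeholder for the real backend call; stubbed for illustration.
    return f"(stubbed response to: {question})"

def run_audit(questions, rate, path="audit.csv"):
    """Ask each question, record the response, and store a rating for analysis."""
    rows = []
    for q in questions:
        response = ask_chatbot(q)
        rows.append({
            "asked_at": datetime.datetime.now().isoformat(timespec="seconds"),
            "question": q,
            "response": response,
            "rating": rate(q, response),  # human or heuristic score, e.g. 1-5
        })
    # Persist the run so ratings can be compared across audit sessions.
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
    return rows
```

Keeping the rating function separate from the asking loop means the same question set can be re-scored after backend fixes without re-running the model.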

What surprised me most is how well the AI responded to many of the questions when the intent was clear. It's surreal to watch something you created work like this. You honestly have to experience it if you haven't.

Of course, it wasn't all perfect. There were small bugs, like case sensitivity and prompt boundary issues, that caused some big perception problems. I also noticed I was asking the AI questions that aren't really represented in the data, and I'm still not sure how I want to handle those yet. There were also weird bracket artifacts leaking into responses, along with a JSON/syntax error tied to a specific (or maybe unspecific) intent. One response in particular was amusing to me, where the AI tried to be safe and helpful at the same time, then ended up contradicting itself. You can find that response at the end of this blog post in the promptAccuracy.md file, question 5, where I asked about student reviews on a professor.
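For bugs like the bracket artifacts and the JSON error, a defensive cleanup layer between the model and the user is one plausible fix. This is a sketch under that assumption, not RamAI's actual code:

```python
import json
import re

def sanitize_response(raw: str) -> str:
    """Strip leftover template tokens like {placeholder} or [tag] before display.
    The artifact pattern here is illustrative; the real one depends on the prompts."""
    cleaned = re.sub(r"[\{\[][A-Za-z0-9_ ]*[\}\]]", "", raw)
    # Collapse the double spaces left behind by removed tokens.
    return re.sub(r"\s{2,}", " ", cleaned).strip()

def parse_plan(raw: str) -> dict:
    """Tolerant parse for planner JSON: fall back to an 'unknown' intent
    instead of crashing on malformed model output."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return {"intent": "unknown"}
```

Routing every model reply through functions like these turns a class of "big perception problems" into silent, recoverable ones.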

Overall, there are strong protections against AI hallucinations in the backend. Department scoping is mostly working as intended. My ranking logic is producing genuinely useful results. Auditing all of this has also made debugging way easier than blindly guessing what to fix.

If you're looking to build a similar AI system, let the prompt engineering for the planner and the answer do half the work. The backend filters and constraints do the rest, driven by the intents the planner produces.
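The split described above, a planner prompt producing structured intents, backend filters applying constraints, then an answer prompt grounded in the filtered data, can be sketched like this. The prompt text and field names are illustrative assumptions, and `llm` stands in for any function that takes a prompt and returns text:

```python
import json

# Hypothetical planner prompt; the real one would define intents more strictly.
PLANNER_PROMPT = (
    'Extract the user\'s intent as JSON with keys "intent" and "department". '
    "Question: {question}"
)

def answer(question, professors, llm):
    # Step 1: the planner turns a free-form question into structured intent.
    plan = json.loads(llm(PLANNER_PROMPT.format(question=question)))
    # Step 2: backend constraints do the heavy lifting. Only rows matching the
    # planned department ever reach the answer prompt, which limits hallucination.
    matches = [
        p for p in professors
        if plan.get("department") in (None, p["department"])
    ]
    # Step 3: the answer prompt is grounded in the filtered data only.
    context = json.dumps(matches)
    return llm(f"Answer using ONLY this data: {context}\nQuestion: {question}")
```

The key property is that the model never sees data outside the planner's scope, so department scoping is enforced in code rather than trusted to the prompt alone.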

Below are the audit files in Markdown, containing the questions asked to the AI, notes, and detailed analyses.

promptAccuracy.md

promptAccuracyFollowUp.md

— Montasir

Reflection

January 20, 2026

It's been a few days since my last post; I've been wrapping up the project. Working on it has taught me a lot about building AI-integrated applications. From designing the architecture and UI layout, to backend filtering, auditing, and refining the AI's responses, each step ended up being a meaningful learning experience. It also made me realize how complex the AI apps we use daily really are.

I also had a few friends try out the app. They liked it and said they genuinely found it useful. That feedback mattered more than I expected, because it helped me see the project from a user's perspective instead of just my own as the builder.

What stood out to me the most is how much trust and accuracy matter when working with AI. It's easy to make something that looks smart, but much harder to build something that users can actually rely on, which I'm still trying to achieve. That shift in mentality is something I'll carry forward.

I'm stepping away from active development of major features for now and will focus on maintenance and occasional polish instead. It's not that there's nothing left to improve; this just feels like a good stopping point. The system works, I've learned a lot, and I'm happy with what it is right now.

The live demo for the project is public if you're interested in checking it out. The UI is different because I migrated from React Native to a web app, which gave me more flexibility and easier deployment.

Visit it here:

ram-ai.vercel.app

— Montasir

Montasir Moyen - Full-Stack Software Developer & Engineer in Boston