(And Why Every AI Enthusiast Should Start with Data)
I started my B.Tech in AI & ML because, honestly, AI was the buzzword of the decade. I was already into computers and robotics, so it felt like the natural choice. But reality hit fast: my first-year syllabus covered zero AI. No linear algebra, no Python — just basic programming and physics.
So, I did what any curious student would do: I started learning AI on my own (but skipped the math fundamentals).
In my second year, I started hunting for internships and landed one at Karnataka Hybrid Micro Devices Ltd. (KHMDL) — a big deal in the electronics industry, supplying to ISRO, TVS, HAL, and other major players.
My task? Build a defect detection system for TFR (Thick Film Resistor) circuits. Sounded exciting. But then, reality hit:
🚨 No public datasets.
🚨 No guidance.
🚨 No prior experience with AI.
I had to figure everything out myself.
What I did:
- Manually collected 500+ defect images and annotated them using Roboflow.
- Tested Detectron2 (an object detection framework), but it underperformed with so little data.
- Switched to image classification with CNNs and later Inception V3 (a pre-trained model for fine-grained analysis), achieving 87% accuracy.
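For context, that kind of transfer-learning setup can be sketched in a few lines of Keras. This is a minimal illustration, not my actual training code; the two-class setup, frozen backbone, and head layers are assumptions:

```python
import tensorflow as tf

NUM_CLASSES = 2  # assumption: defective vs. OK

# Pre-trained Inception V3 backbone without its classification head.
# (weights=None here just to keep the sketch offline; in practice you'd
# use weights="imagenet" to get the transfer-learning benefit.)
base = tf.keras.applications.InceptionV3(
    weights=None,
    include_top=False,
    input_shape=(299, 299, 3),
)
base.trainable = False  # freeze the backbone, train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```

With only ~500 images, freezing the pre-trained backbone and training just the small head is what makes this workable at all.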
I finished my internship but left with a promise:
“I’ll come back and do this properly once I actually know what I’m doing.”
Next up, an internship at RBG.AI, where I worked on animal detection models for forest environments. The job? Annotate images, train models, and improve accuracy.
Did I learn a lot? Not really.
Did I complete the project? Yes.
It felt more like applying what I knew than learning something new. I realized that just training models wasn’t enough — I needed a deeper understanding of AI.
Then came the turning point. A guest lecture by the Co-Founder of Phosphene AI made me rethink my entire approach to AI.
He broke it down old-school style — teaching AI the way it was understood in the 1990s, using pure math and statistics.
And that’s when it hit me:
I had been relying on prebuilt AI tools instead of understanding how they actually worked.
I started:
✅ Writing custom activation functions instead of just switching models (e.g., tweaking ReLU for better gradient flow).
✅ Analyzing data distributions instead of blindly optimizing hyperparameters.
✅ Thinking like a mathematician instead of a “plug-and-play” ML engineer.
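To make the first point concrete, here is roughly what writing your own activation looks like once you stop treating it as a black box. A minimal NumPy sketch; the negative slope value is illustrative, not the exact tweak I used:

```python
import numpy as np

def tweaked_relu(x, alpha=0.05):
    """ReLU with a small negative slope, so neurons with negative inputs
    still pass some gradient instead of "dying"."""
    return np.where(x > 0, x, alpha * x)

def tweaked_relu_grad(x, alpha=0.05):
    """Derivative: 1 on the positive side, alpha on the negative side."""
    return np.where(x > 0, 1.0, alpha)

x = np.array([-2.0, 0.5, 3.0])
print(tweaked_relu(x))       # values: -0.1, 0.5, 3.0
print(tweaked_relu_grad(x))  # values: 0.05, 1.0, 1.0
```

Writing out both the function and its derivative by hand is the whole point: you see exactly where gradients vanish and why.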
Suddenly, AI became fun again.
At some point, doubt started creeping in. With AI evolving at breakneck speed and new breakthroughs dropping every other week, I started questioning — was I on the right path? Had I learned enough? Was I just scratching the surface while the field kept moving ahead?
Instead of overthinking, I decided to just push through and participate in hackathons.
One such event was the Smart India Hackathon (SIH) Finals, where my team and I built a full-fledged Mental Health Web Application and launched it.
🛠 Designed a chatbot using Hugging Face models (pre-trained NLP transformers).
🚀 Ensured a smooth user experience for real-time mental health interactions.
We named the application ReboundX and deployed it:
👉 Check it out here
Unfortunately, we didn’t win, but the experience was game-changing. I learned how to work under pressure, integrate AI into real-world applications, and collaborate with a solid team.
Somewhere in between fighting with ML models and diving into new AI concepts, I stumbled into Frontend Development.
It started as a side quest — I wanted to showcase my AI models better. What’s the point of building a great model if no one can use it, right?
So, I picked up React.js, and before I knew it, I was:
✅ Building dashboards for my AI projects.
✅ Learning state management, APIs, and UI/UX principles.
✅ Creating a personal portfolio, to-do app, weather app, and calculator just for fun.
Then came a real-world project:
A consultant company approached my team to build a Facial Emotion Detection Dashboard. We divided the work:
🧠 One friend trained the AI model.
🖥️ I built the React-based frontend and handled integration.
The final result? A clean, interactive dashboard that:
🎭 Detected emotions in real-time.
📊 Displayed visual analytics on emotions.
🔄 Seamlessly integrated AI with UI.
We named it Revealix.ai — and that project flipped a switch in me: frontend isn’t just presentation, it’s how people experience AI.
In a way, Frontend Development became my bridge between AI and making my projects actually usable.
Remember that first project I struggled with? I kept my word and returned, but this time with experience and a new approach.
Instead of forcing AI where it wasn’t needed, I built a simple yet effective image subtraction-based solution:
🛠 Designed a custom rig with an adjustable camera stand — like a microscope setup, but swapping the lens for a camera.
🛠 Built a calibrated camera system to capture reference images.
🛠 Compared new images to detect defects automatically.
🛠 Stored defect data in a structured SQL database for future AI training.
✅ Built a Streamlit dashboard for real-time data visualization.
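The core of the image subtraction idea fits in a few lines. A minimal NumPy sketch with a toy grayscale image; the real system compared calibrated camera captures against golden reference shots:

```python
import numpy as np

def detect_defects(reference, test, threshold=30):
    """Return a boolean mask of pixels whose grayscale difference from
    the calibrated reference image exceeds `threshold`, plus the
    fraction of the image flagged as defective."""
    diff = np.abs(reference.astype(np.int16) - test.astype(np.int16))
    mask = diff > threshold
    return mask, float(mask.mean())

# Toy example: two identical 100x100 images except a bright 10x10 "defect".
ref = np.full((100, 100), 128, dtype=np.uint8)
new = ref.copy()
new[40:50, 40:50] = 255  # simulated defect patch
mask, defect_ratio = detect_defects(ref, new)
print(mask.sum(), defect_ratio)  # 100 defective pixels, 1% of the image
```

The cast to `int16` before subtracting matters: subtracting `uint8` arrays directly wraps around instead of going negative, which silently hides defects darker than the reference.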
The result?
💡 Instant defect detection.
💡 Scalable dataset for future AI improvements.
💡 And they loved it!
Lesson learned: AI isn’t always the answer — a well-designed non-AI solution can work just as well. Sometimes, the best AI move is collecting better data first.
During my time at KHMDL, I met an engineer from Keyence, a global leader in industrial vision systems.
I asked how their AI model achieved near-perfect accuracy in defect detection.
His answer?
“We trained it on 10 years of factory data.”
A decade of massive, high-quality factory data — that’s what made their models nearly unbeatable.