Tale of a Desi Mom
Not Belan 🥄 but Code 💻
In this blog, let's see how I made a Desi Mom AI persona that solves your Python doubts (only and only Python doubts). But before I show you the code, let's understand something magical that made OpenAI's gpt-4.1-mini model take on the avatar of a Desi Mom.
Prompting
As you sow, so shall you reap
Don't worry, I'm not here to teach you ethics, but you must have heard the term "GIGO": Garbage In, Garbage Out. By now, I think all of us have at least once witnessed an LLM being a stupid psychotic idiot that just shits out code. Like: "Hey, do you remember I told you to use MyBatis for writing the mapper, and yet for the hundredth time you are using Hibernate."
Well, the problem here is usually not the LLM itself (~70% of the time) but your prompt. If you type an incomplete, vague, overly complex, or huge prompt without proper context, it is just garbage for the model, and it will ultimately give you your beloved output in the form of garbage.
There are some styles/standards of prompting with which you can make the most out of any LLM. So next time, be a little smart while dealing with LLMs. Let's discuss the different styles of prompting.
We will discuss 3 basic styles (Zero-shot, Few-shot, and Chain-of-Thought prompting); you can go ahead and explore more at The Prompt Engineering Guide. To demo the prompting styles, we'll pass a SYSTEM_PROMPT to the model to create a persona. Basically, let's delulu the LLMs.
But What is System Prompt?
It's no rocket science. It's just a specialized prompt containing instructions, rules, guidelines, and (in some prompting styles) example responses, which tell the LLM its context and responsibilities and set its tone in whichever way you want.
To see the output of each prompting style, we will be using OpenAI's gpt-4.1-mini model; you can refer to OpenAI's documentation here to add the client call in your Python code. Something like:
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Define system prompt variable
SYSTEM_PROMPT = "You are a helpful assistant that answers general knowledge questions."

response = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "What is the capital of Norway?"},
    ],
)

print(response.choices[0].message.content)
The code to use the LLM will remain the same, with each prompting style we will be tweaking the SYSTEM_PROMPT.
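Since only the SYSTEM_PROMPT changes from style to style, a tiny helper keeps the boilerplate out of the way. This is a sketch of my own (the `ask` helper and its parameters are not from OpenAI's docs); it takes the client as an argument, so you can even pass a stub while experimenting without burning credits:

```python
def ask(client, system_prompt, question, model="gpt-4.1-mini"):
    """Send one system + user message pair and return the reply text."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```

With this in place, each style below is just a different `system_prompt` argument.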
Zero-Shot Prompting:
Your usual way
In this prompting style, you tell your LLM what you want (what it needs to do, a.k.a. the instruction) in plain text. This technique specifically does not include giving examples or any other sort of context.
Let's look at an example response:
SYSTEM_PROMPT = "You are a helpful assistant that answers general knowledge questions."
❯ python main.py
The capital of Norway is Oslo
Few-Shot Prompting:
Zero-shot + Examples
In Few-Shot Prompting, before asking the LLM for a response, you provide some examples, which help the model set some context. The examples result in better performance. Let's see how we can tweak our SYSTEM_PROMPT in this case: let's tell our LLM to teach us something in Java, and our question will be "What are functions in Java?":
SYSTEM_PROMPT = """
You are a highly experienced Java expert with 10+ years of industry knowledge.
You are great at explaining Java concepts, writing idiomatic code, and identifying bugs.
You prefer clear and concise answers with examples.
You never hallucinate; if you don't know something, say so.
Example 1:
User: What is the difference between `==` and `.equals()` in Java?
You: `==` compares object references, while `.equals()` compares the actual content of the objects (if overridden). For example:
```java
String a = new String("hello");
String b = new String("hello");
System.out.println(a == b); // false
System.out.println(a.equals(b)); // true
```
"""
Notice that I did not have to ask for example code in my question; since the example in the system prompt included a code sample, the LLM was smart enough to realize that it would be best to include sample code as well.
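One way to keep few-shot prompts tidy (a sketch of my own, not the post's repo) is to store the examples as data and fold them into the messages list, since the chat format also lets you replay worked examples as prior user/assistant turns instead of embedding them in one long system string:

```python
def build_few_shot_messages(instructions, examples, question):
    """Turn (user, assistant) example pairs into a chat message list."""
    messages = [{"role": "system", "content": instructions}]
    for user_text, assistant_text in examples:
        # Each worked example is replayed as a prior user/assistant turn.
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    # The real question comes last.
    messages.append({"role": "user", "content": question})
    return messages

examples = [
    ("What is the difference between `==` and `.equals()` in Java?",
     "`==` compares references; `.equals()` compares contents (if overridden)."),
]
msgs = build_few_shot_messages("You are a Java expert.", examples,
                               "What are functions in Java?")
```

Adding a new example is then just one more tuple in the list, with no string surgery on the system prompt.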
Chain-Of-Thought Prompting:
Let’s think step by step
Rather than trying to solve the user prompt in a single go, you tell your LLM how to break the task down into small, incremental steps, to boost problem solving and enhance the analyzing capabilities of the model.
Not only do you spell out the incremental steps for solving a problem, you also provide the model with a diverse range of examples, to set the context for what the appropriate course of action is when something out of scope for the current use case is asked.
Let’s see an example:
SYSTEM_PROMPT = """
You are a smart AI tutor that solves only arithmetic word problems by thinking step-by-step.
You never answer anything outside this scope. You never guess or skip steps.
You always break the problem into reasoning steps and provide the final answer at the end with 'Answer:'.
Examples:
Example 1:
User: If there are 4 apples and someone adds 3 more, how many apples are there?
You:
Step 1: Start with 4 apples.
Step 2: Add 3 more apples.
Step 3: 4 + 3 = 7
Answer: 7
Example 2:
User: A train travels 60 km in the first hour and 80 km in the second hour. What is the total distance?
You:
Step 1: The train goes 60 km in the first hour.
Step 2: It goes 80 km in the second hour.
Step 3: 60 + 80 = 140 km
Answer: 140 km
Example 3:
User: A packet costs ₹12. If you buy 4 packets, how much do you pay?
You:
Step 1: One packet costs ₹12.
Step 2: 4 packets will cost 4 × ₹12.
Step 3: 4 × 12 = ₹48
Answer: ₹48
If the question is not an arithmetic word problem, respond:
"I'm sorry, I only solve arithmetic word problems step by step."
Now continue answering the user's word problems using clear reasoning steps and only within this domain.
"""
See how we are defining the steps for the model; and if the user asks anything not related to an arithmetic word problem, the model gives the refusal reply instead of executing the steps defined in the system prompt.
Let's see how our model replies to both an arithmetic problem and a gibberish ask.
response = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Montu has 10 apples. His best friend asks for half of them. How many apples will Montu have in the end if he gives his best friend half of them?"},
        {"role": "user", "content": "What are functions in Java?"},
    ],
)
print(response.choices[0].message.content)
Notice that for the second prompt no steps were executed.
The Desi Mom Python Bot:
To see Chain-of-Thought prompting in action, I made a chatbot that resolves only Python programming doubts/questions, but with a twist: I wanted the bot to behave like a desi mom, complete with all the well-renowned tane (taunts).
Let's have a step-by-step look at how I made this happen via the system prompt:
Role Setting:
You are a lovable but savage Desi (Indian) mom who is an expert in the Python programming language.
You absolutely adore your child (the user), but never miss a chance to taunt, roast, and compare them to Sharma ji ka ladka.
You ONLY help with Python programming problems and code in the Python language. Nothing else. Not gossip. Not relationship advice.
Your agenda is to answer all coding-related questions with examples in the Python language.
Giving some sample taunts/metaphors to use:
Common taunts:
- "Sharma ji ka beta..."
- "Meine toh pehle hi bola tha..."
- "Beta, puchne seekhne se pehle sochna bhi seekho!"
- "Tujhse code nahi ho raha, aur mujhe blame karega?"
Desi metaphors:
- "Bas zyada kuch nahi, ek cooker mein 2 chammach logic daala aur pressure diya."
- "Python ki indentation galat hai toh code nahi chalega, jaise chai mein namak daal diya ho."
- "Code break ho gaya? Toh kya ho gaya, zindagi thodi ruk gayi?"
Tone:
- Warm but sarcastic
- Always roasting affectionately
- Uses food, relatives, and Bollywood as reference points
Couple of Examples to help the model set the context:
Example 1:
User: How do I check if a number is even in Python?
Assistant: Arre beta, itna basic sawaal? Yeh toh Sharma ji ka 5 saal ka pota bhi karta hai!
Even number check karne ke liye modulo operator use karte hain: number % 2 == 0. Simple, jaise chai mein shakkar.
if num % 2 == 0:
    print('Even')
else:
    print('Odd')
Chalo test karte hain — 4 dala, 'Even' aaya. 5 dala, 'Odd'. Theek hai, logic tight hai.
Beta, agle baar yaad rakhna — % 2 karo aur Even nikal jaayega. Itna toh Sharma ji ka beta ne 3rd class mein seekh liya tha.
Example 4:
User: How do I get my crush to notice me?
Assistant: Arre beta! Yeh Python ka class hai ya Shaadi.com ka profile setup?!
Main toh sochi thi tu 'for loop' seekhne aaya hai... yeh toh tu poori Bollywood script leke aaya hai!
Nahi beta, mummy sirf Python ke sawaalon ka hi jawaab deti hai. Crush se baat karne ke liye Sharma ji ke ladke se puchh lo — uska confidence toh OOPs se bhi oopar hai!
Haan, bilkul sahi socha — focus wapas Python pe lana zaroori hai warna IDE bhi error de dega.
Beta, ab concentrate karo. Python padh lo pehle, pyaar toh recursion jaisa hai — agar sahi base case mila, toh automatically ho jaata hai!
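The full system prompt stitches those pieces (role, taunts/metaphors, tone, examples) together. Here's a sketch of how the assembly can be done programmatically; the section strings below are shortened placeholders, not the repo's exact prompt:

```python
# Shortened placeholder sections; the real prompt in the repo is longer.
ROLE = ("You are a lovable but savage Desi (Indian) mom who is an expert "
        "in the Python programming language. You ONLY help with Python problems.")

TAUNTS = ('Common taunts:\n'
          '- "Sharma ji ka beta..."\n'
          '- "Meine toh pehle hi bola tha..."')

TONE = "Tone: warm but sarcastic, always roasting affectionately."

EXAMPLES = ("Example:\n"
            "User: How do I check if a number is even in Python?\n"
            "Assistant: Arre beta, number % 2 == 0. Simple, jaise chai mein shakkar.")

# Joining with blank lines keeps each section visually separate for the model.
SYSTEM_PROMPT = "\n\n".join([ROLE, TAUNTS, TONE, EXAMPLES])
```

Keeping the sections as separate variables makes it easy to add a new taunt or example without touching the rest of the prompt.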
You can take a look at the complete source code here: https://github.com/Mitali-laroia/Desi-Mom-Bot. I deployed the app on Streamlit: https://desi-mom-bot.streamlit.app/
The app could go down due to credit expiration or traffic inactivity. Apologies if the bot ends up roasting you. Thank you for sticking around till the end 😊.
Happy Learning.