The art of prompting lies not in asking the right questions, but in knowing how to ask them. — A developer’s guide to AI-assisted coding
These notes were taken while learning Claude Code, a highly agentic coding assistant.
1. Learning the existing codebase with zero-shot prompts
High-level overview
- Give me an overview of this codebase
- What are the key data models?
- Describe the API endpoints
“Explain how …” and “Tell me more …” prompts
- Explain how the documents are processed
- What is the format of the document expected by the document_processor?
- How are the course chunks loaded into the database?
- How is the text transformed into chunks, and what is the size of each chunk?
Visualization
- Trace the process of handling a user’s query from frontend to backend
- Draw a diagram that illustrates this flow
Hands-on with the project
- How can I run the application?
2. Adding features with few-shot prompts
Plan mode first
- Use “Ask” mode in Copilot, press Shift+Tab to switch to plan mode in Claude Code, etc.
Referencing files to implement the feature
Template:
- Describe the current base feature.
- Describe the desired improvement.
- File 1 to be touched, and its logic.
- File 2 to be touched, and its logic.
- Any detailed strict/mandatory requirements.
Sample 1: Update an API response to include a reference link.
The chat interface displays query responses with source citations.
I need to modify it so each source becomes a clickable link that opens the corresponding lesson video in a new tab:
- When courses are processed into chunks in @backend/document_processor.py, the link of each lesson is stored in the course_catalog collection
- Modify _format_results in @backend/search_tools.py so that the lesson links are also returned
- The links should be embedded invisibly (no visible URL text)
Referencing images to implement the feature
Template:
Reference the image, describe it, and state the requested action:
- The layout in the image, and what is required of it
- The style in the image, and what is required of it
Sample 1’: Fix source-link visibility
[Image #1] These links in the image are hard to read. Can you make this more visually appealing?
Referencing external MCP server tools
Template:
# What the tool is and what it can do
Using the XXX MCP server, do such-and-such.
# What requirement should be completed by using the tool
I want that ... ...
# What mandatory details are required
Make sure that ... ...
Sample 2: Add ‘+ New Chat’ Feature
Add a '+ NEW CHAT' button to the left sidebar above the courses section.
When clicked, it should:
- Clear the current conversation in the chat window
- Start a new session without page reload
- Handle proper cleanup on both @frontend and @backend
- Match the styling of existing sections (Courses, Try asking) - same font size, color, and uppercase formatting
Leverage the Playwright MCP server tool to capture screenshots, instead of taking them manually.
Using the playwright MCP server, visit 127.0.0.1:8000 and view the '+ New Chat' button.
- I want that button to look the same as the other links below for Courses and Try Asking.
- Make sure this is left aligned and that the border is removed.
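As a prerequisite, the Playwright MCP server has to be registered with Claude Code. A minimal sketch, assuming Node.js is available and Microsoft's @playwright/mcp package is used:
# Register the Playwright MCP server with Claude Code
claude mcp add playwright -- npx @playwright/mcp@latest
# Verify that the server is registered
claude mcp list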
Another sample, for a backend feature:
Sample 3: Adding a tool to the chatbot
In @backend/search_tools.py, add a second tool alongside the existing content-related tool.
This new tool should handle course outline queries.
- Functionality:
- Input: Course title
- Output: Course title, course link, and complete lesson list
- For each lesson: lesson number, lesson title
- Data source: Course metadata collection of the vector store
- Update the system prompt in @backend/ai_generator so that the course title, course link, and the number and title of each lesson are all returned when addressing outline-related queries.
- Make sure that the new tool is registered in the system.
3. Testing, Debugging and Code Refactoring
Claude Code thinking level: think < think hard < think harder < ultrathink
Error Debugging with CoT methodology
Chain of Thought: a logical way to debug, instead of random guessing!
Template:
- Describe the error.
- Ask the LLM to write tests for each checkpoint where you think the bug could be occurring:
- Write a test to evaluate checkpoint 1, @reference the class file
- Write a test to evaluate checkpoint 2, ... ...
- Instruct where the tests/changes should be saved.
- Kick off the tests to identify which components are failing.
- Propose fixes based on what the tests reveal is broken.
- State what level of thinking you want the LLM to pursue.
Sample 4: Debug/fix a 'query failed' error
The RAG chatbot returns 'query failed' for any content-related questions. I need you to:
1. Write tests to evaluate the outputs of the execute method of the CourseSearchTool in @backend/search_tools.py
2. Write tests to evaluate if @backend/ai_generator.py correctly calls for the CourseSearchTool
3. Write tests to evaluate how the RAG system handles content-related queries.
- Save the tests in a tests folder within @backend.
- Run those tests against the current system to identify which components are failing.
- Propose fixes based on what the tests reveal is broken.
Think.
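Once Claude has written the tests, they can also be kicked off manually. A minimal sketch, assuming pytest is installed and the tests folder from the prompt above:
# Run the generated tests from the backend folder
cd backend
python -m pytest tests/ -v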
Code refactoring with multiple-subagent support
Template: explicitly tell Claude to use two subagents
**Use two parallel subagents to brainstorm possible plans. Do not implement any code.**
... ...
requirements
... ...
Sample 5: Refactoring sequential tool calling
Refactor @backend/ai_generator.py to support sequential tool calling where Claude can make up to 2 tool calls in separate API rounds.
Current behavior:
- Claude makes 1 tool call → tools are removed from API params → final response
- If Claude wants another tool call after seeing results, it can't (gets empty response)
Desired behavior:
- Each tool call should be a separate API request where Claude can reason about previous results
- Support for complex queries requiring multiple searches for comparisons, multi-part questions, or when information from different courses/lessons is needed
Example flow:
1. User: "Search for a course that discusses the same topic as lesson 4 of course X"
2. Claude: get course outline for course X → gets title of lesson 4
3. Claude: uses the title to search for a course that discusses the same topic → returns course information
4. Claude: provides complete answer
Requirements:
- Maximum 2 sequential rounds per user query
- Terminate when: (a) 2 rounds completed, (b) Claude's response has no tool_use blocks, or (c) tool call fails
- Preserve conversation context between rounds
- Handle tool execution errors gracefully
Notes:
- Update the system prompt in @backend/ai_generator.py
- Update the test @backend/tests/test_ai_generator.py
- Write tests that verify the external behavior (API calls made, tools executed, results returned) rather than internal state details.
Use two parallel subagents to brainstorm possible plans. Do not implement any code.
4. Running multiple subagents simultaneously
Custom Claude slash commands
.claude/commands/custom-cli.md
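A slash command is simply a Markdown file in .claude/commands/ whose body becomes the prompt and whose filename becomes the command name. A minimal sketch (the command body here is a hypothetical example):
mkdir -p .claude/commands
cat > .claude/commands/custom-cli.md <<'EOF'
Review the staged changes and summarize them as a conventional commit message.
EOF
# Inside a Claude Code session, invoke it as /custom-cli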
Leverage Git worktrees for parallel feature implementation
mkdir .trees
# new terminal 1
git worktree add .trees/new_feature_1
claude
/prompt for feature 1
# new terminal 2
git worktree add .trees/new_feature_2
claude
/prompt for feature 2
# new terminal 3
git worktree add .trees/new_feature_3
claude
/prompt for feature 3
In the main terminal: ask Claude Code to git merge the worktrees and resolve any merge conflicts
# main branch terminal
claude
/prompt use the git merge command to merge in all the worktrees of the .trees folder into main and fix any conflicts if there are any
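After the merge, the worktrees can be cleaned up from the main terminal with standard git commands:
# Remove each worktree once its branch has been merged
git worktree remove .trees/new_feature_1
git worktree remove .trees/new_feature_2
git worktree remove .trees/new_feature_3
# Drop any stale administrative entries
git worktree prune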
5. Exploring GitHub Integration & Hooks
You can launch Claude Code by resuming an old conversation:
claude --resume
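The --resume flag opens an interactive picker of past sessions. To jump straight back into the most recent session, there is also a continue flag:
# Continue the most recent conversation directly
claude --continue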
Integrating Claude Code with GitHub
claude
/install-github-app
Claude Code hooks
/hooks
1. PreToolUse - Before tool execution
2. PostToolUse - After tool execution
3. Notification - When notifications are sent
4. UserPromptSubmit - When the user submits a prompt
5. SessionStart - When a new session is started
6. Stop - Right before Claude concludes its response
7. SubagentStop - Right before a subagent (Task tool call) concludes its response
8. PreCompact - Before conversation compaction
9. SessionEnd - When a session is ending
10. Disable all hooks
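Hooks are configured under the hooks key in .claude/settings.json: each event maps to matchers that run shell commands. A minimal sketch (the Bash matcher and the logging command are illustrative assumptions; merge this into an existing settings.json rather than overwriting it):
cat > .claude/settings.json <<'EOF'
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "echo 'about to run a shell command' >> ~/claude-hooks.log" }
        ]
      }
    ]
  }
}
EOF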
6. Refactoring a Jupyter Notebook & Creating a Dashboard
Prompt template for code refactoring
**Context: define the task and role**
- Reference the @file in the @folder that you want to refactor.
- State which features the refactoring should focus on.
- State what must stay the same as before, with no changes.
**Assessment: understand the existing pieces that need attention:**
- What feature 1 does
- What feature 2 does
- What feature 3 does
**Requirements: Refactoring Requirements**
1. Refactoring sample 1: structure & documentation
- task 1
- task 2
2. Refactoring sample 2: code quality
- task 1
- task 2
**Deliverables Expected**
- A one-sentence expectation for the sample 1 refactoring
- A one-sentence expectation for the sample 2 refactoring
**Success Criteria**
- explain the user experience for sample 1
- explain the user experience for sample 2
- other factors
Sample for Python notebook code refactoring
The @EDA.ipynb contains exploratory data analysis on e-commerce data in @ecommerce_data, focusing on sales metrics for 2023. Keep the same analysis and graphs, and improve the structure and documentation of the notebook.
Review the existing notebook and identify:
- What business metrics are currently calculated
- What visualizations are created
- What data transformations are performed
- Any code quality issues or inefficiencies
**Refactoring Requirements**
1. Notebook Structure & Documentation
- Add proper documentation and markdown cells, with a clear header and a brief explanation for each section
- Organize into logical sections:
- Introduction & Business Objectives
- Data Loading & Configuration
- Data Preparation & Transformation
- Business Metrics Calculation (revenue, product, geographic, customer experience analysis)
- Summary of observations
- Add table of contents at the beginning
- Include data dictionary explaining key columns and business terms
2. Code Quality Improvements
- Create reusable functions with docstrings
- Implement consistent naming and formatting
- Create separate Python files:
- business_metrics.py containing business metric calculations only
- data_loader.py for loading, processing, and cleaning the data
3. Enhanced Visualizations
- Improve all plots with:
- Clear and descriptive titles
- Proper axis labels with units
- Legends where needed
- Appropriate chart types for the data
- Include date range in plot titles or captions
- Use consistent, business-oriented color schemes
4. Configurable Analysis Framework
The notebook shows the computation of metrics for a specific date range (the entire year of 2023 compared to 2022). Refactor the code so that the data is first filtered according to a configurable month and year, and implement general-purpose metric calculations.
**Deliverables Expected**
- Refactored Jupyter notebook (EDA_Refactored.ipynb) with all improvements
- Business metrics module (business_metrics.py) with documented functions
- Requirements file (requirements.txt) listing all dependencies
- README section explaining how to use the refactored analysis
**Success Criteria**
- Easy-to-read code & notebook (do not use icons in print statements or markdown cells)
- Configurable analysis that works for any date range
- Reusable code that can be applied to future datasets
- Maintainable structure that other analysts can easily understand and extend
- Maintain all existing analyses while improving the quality, structure, and usability of the notebook.
- Do not assume any business thresholds.