This tutorial provides minimal, practical examples for integrating Purdue's GenAI Studio into your course. Unlike production chatbots, these examples focus on teaching core concepts with simple, ready-to-run Python scripts. By the end, you'll understand how to create AI-powered educational tools while keeping all student data on Purdue-managed infrastructure.
Overview
What is Purdue GenAI Studio?
Purdue's GenAI Studio is an on-premises AI infrastructure hosted on RCAC's community cluster. It provides faculty and students with access to large language models without sending data to external commercial LLM providers.
Data Sovereignty
All prompts and documents are processed on Purdue RCAC's on-premises systems instead of commercial cloud LLM providers. This helps support institutional privacy obligations, but you should still avoid uploading sensitive or regulated data (for example, FERPA-protected student records or highly restricted content).
Custom Models
Upload course materials to create course-specific AI assistants with retrieval-augmented generation.
OpenAI Compatible
Works with standard OpenAI-style libraries and existing tools via a compatible API.
Available for Purdue
Currently available at no direct cost for Purdue faculty, staff, and students while the service is in pilot. Check RCAC's GenAI Studio documentation for any future changes to access or quotas.
Key Benefits for Educators
Why Use GenAI Studio?
- ✅ Privacy: Prompts and documents stay on Purdue-controlled infrastructure
- ✅ Customization: Attach your own course materials via knowledge bases
- ✅ Integration: Works with standard Python HTTP libraries and tools
- ✅ Cost: No direct usage fee for Purdue community during the pilot
Getting Started
Prerequisite: Python and Required Packages
All of the examples on this page use standard Python 3 and a few small libraries.
You can run them on Windows, macOS, Linux, or Windows Subsystem for Linux (WSL).
If you already have a working Python 3.10+ setup with pip, you can
skip ahead to Step 1.
1. Install or verify Python 3
- Windows: Visit python.org/downloads/windows and install the latest Python 3 release. Make sure to check "Add Python to PATH" during installation. After installation, open Command Prompt and run:

  python --version
  py --version

  One of these should report a Python 3.x version.

- macOS: Download the official installer from python.org/downloads/macos. After installation, open Terminal and run:

  python3 --version

  to confirm that Python 3 is available.

- Linux: Most distributions ship Python 3 by default. Check with:

  python3 --version

  If it is missing or very old, install it via your package manager (for example, sudo apt install python3 python3-pip on Ubuntu).

- Windows Subsystem for Linux (WSL): If you prefer a Linux shell inside Windows, you can install WSL by running wsl --install in an elevated PowerShell window, then restarting. See Microsoft's guide: learn.microsoft.com/windows/wsl/install. Once WSL is installed, open your WSL terminal and follow the Linux instructions above.
2. Create a virtual environment (recommended)
A virtual environment keeps the packages for this project isolated
from your other Python projects. The built-in venv module is sufficient
for most teaching use-cases. For more background, see the Python tutorial on virtual
environments:
docs.python.org/3/tutorial/venv.html
.
# Create a virtual environment named .venv
# macOS / Linux / WSL
python3 -m venv .venv
# Windows (Command Prompt or PowerShell)
python -m venv .venv
# Activate it
# macOS / Linux / WSL
source .venv/bin/activate
# Windows Command Prompt
.venv\Scripts\activate
# Windows PowerShell
.venv\Scripts\Activate.ps1
When the environment is active, your shell prompt will usually show (.venv).
Any packages you install with pip now stay inside this project folder.
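If you are ever unsure whether the virtual environment is actually active, Python itself can tell you. This small check uses only the standard library: inside a venv, `sys.prefix` points into the environment folder and differs from `sys.base_prefix`.

```python
# Quick sanity check: are we running inside a virtual environment?
import sys

def in_virtualenv() -> bool:
    """Return True when Python is running inside a virtual environment."""
    return sys.prefix != sys.base_prefix

if __name__ == "__main__":
    print("Virtual environment active:", in_virtualenv())
```

Run it with the same `python` command you used to activate the environment; it should print `True` when the venv is active.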
3. Install required Python packages
These examples use two external libraries:
requests for HTTP calls and python-dotenv for loading
environment variables from a .env file. Install them with:
pip install requests python-dotenv
To make your environment reproducible (and to match the examples), create a
requirements.txt file:
requests>=2.32.0
python-dotenv>=1.0.0
Then anyone (including a shared teaching server or Codespaces) can install the same dependencies with:
pip install -r requirements.txt
Tip: For more details on installing packages with pip,
see the Python packaging guide:
packaging.python.org/tutorials/installing-packages/
.
Step 1: Access GenAI Studio
- Navigate to genai.rcac.purdue.edu
- Log in with your Purdue credentials
- You'll see the OpenWebUI interface
Step 2: Generate an API Key
- Click your profile icon (top right)
- Go to Settings → Account
- Under API Keys, click Create new secret key
- Copy and save your API key securely (it won't be shown again)
Security Note: Never commit API keys to version control. Always use environment variables and add .env files to your .gitignore.
Step 3: Set Your API Key as an Environment Variable
Your API key needs to be accessible to Python scripts. Here's how to set it on different operating systems:
Linux and macOS (Terminal)
Temporary (current session only):
export GENAI_API_KEY="sk-your-key-here"
Permanent (recommended):
Add the export command to your shell configuration file:
# Edit your ~/.bashrc file
echo 'export GENAI_API_KEY="sk-your-key-here"' >> ~/.bashrc
# Reload the file
source ~/.bashrc
# Edit your ~/.zshrc file
echo 'export GENAI_API_KEY="sk-your-key-here"' >> ~/.zshrc
# Reload the file
source ~/.zshrc
Windows (Command Prompt)
Temporary (current session only):
set GENAI_API_KEY=sk-your-key-here
Permanent (system-wide):
setx GENAI_API_KEY "sk-your-key-here"
Note: After using setx, close and reopen your command prompt for the change to take effect.
Windows (PowerShell)
Temporary (current session only):
$env:GENAI_API_KEY = "sk-your-key-here"
Permanent (user-level):
[System.Environment]::SetEnvironmentVariable('GENAI_API_KEY', 'sk-your-key-here', 'User')
Windows Subsystem for Linux (WSL)
Use the same commands as Linux (bash), but set them inside your WSL terminal:
# Temporary
export GENAI_API_KEY="sk-your-key-here"
# Permanent - add to ~/.bashrc
echo 'export GENAI_API_KEY="sk-your-key-here"' >> ~/.bashrc
source ~/.bashrc
Using a .env File (Recommended for Projects)
For better security and portability, use a .env file with the python-dotenv package:
pip install python-dotenv
# .env
GENAI_API_KEY=sk-your-key-here
# .gitignore
.env
*.env
from dotenv import load_dotenv
import os
# Load environment variables from .env file
load_dotenv()
# Now you can use os.environ.get()
API_KEY = os.environ.get("GENAI_API_KEY")
Verifying Your Environment Variable
Check that your API key is set correctly:
# macOS / Linux / WSL
echo $GENAI_API_KEY
# Windows Command Prompt
echo %GENAI_API_KEY%
# Windows PowerShell
echo $env:GENAI_API_KEY
# Any platform (Python)
python -c "import os; print(os.environ.get('GENAI_API_KEY', 'NOT SET'))"
Success! If you see your API key (starting with sk-), you're ready to go. If you see "NOT SET" or nothing, review the steps above.
Step 4: Create a Custom Model (Optional)
If you want a model that references your specific course materials:
- In GenAI Studio, go to Workspace → Knowledge
- Click + to create a new knowledge base
- Upload your course materials (PDFs, text files, etc.)
- Go to Workspace → Models
- Click + to create a new model
- Select a base model (e.g., llama3, deepseek-r1)
- Attach your knowledge base
- Add a system prompt with course-specific instructions
- Name your model (e.g., gpt-stat350, cs180-assistant)

Note: RCAC recommends that you do not upload documents containing sensitive or regulated information (for example, FERPA-protected data or highly restricted research data).
Quick Start Option: You can use existing base models without creating a custom one. Custom models are most useful when you want the AI to reference specific course materials.
Basic API Usage
Example 1: Simple Chat Completion
The most basic example - ask a question and get a response.
#!/usr/bin/env python3
"""
Simple chat completion example using Purdue GenAI Studio
"""
import os
import requests

# Configuration
API_KEY = os.environ.get("GENAI_API_KEY")
BASE_URL = "https://genai.rcac.purdue.edu/api"
MODEL = "llama3"  # Use your custom model name like "gpt-stat350"

def chat(message):
    """Send a message and get a response"""
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json"
    }
    payload = {
        "model": MODEL,
        "messages": [
            {"role": "user", "content": message}
        ],
        "temperature": 0.7,
        "max_tokens": 500
    }
    response = requests.post(
        f"{BASE_URL}/chat/completions",
        headers=headers,
        json=payload,
        timeout=60
    )
    response.raise_for_status()
    result = response.json()
    return result['choices'][0]['message']['content']

# Example usage
if __name__ == "__main__":
    answer = chat("Explain the central limit theorem in simple terms.")
    print(answer)
Run it:
python simple_chat.py
Example 2: Multi-Turn Conversation
Maintain context across multiple exchanges.
#!/usr/bin/env python3
"""
Multi-turn conversation example
"""
import os
import requests

API_KEY = os.environ.get("GENAI_API_KEY")
BASE_URL = "https://genai.rcac.purdue.edu/api"
MODEL = "llama3"

class ChatSession:
    """Maintains conversation history"""

    def __init__(self, system_prompt=None):
        self.messages = []
        if system_prompt:
            self.messages.append({
                "role": "system",
                "content": system_prompt
            })

    def send(self, user_message):
        """Send a message and get a response"""
        # Add user message
        self.messages.append({
            "role": "user",
            "content": user_message
        })
        # Call API
        headers = {
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json"
        }
        payload = {
            "model": MODEL,
            "messages": self.messages,
            "temperature": 0.7
        }
        response = requests.post(
            f"{BASE_URL}/chat/completions",
            headers=headers,
            json=payload,
            timeout=60
        )
        response.raise_for_status()
        result = response.json()
        # Add assistant response to history
        assistant_message = result['choices'][0]['message']['content']
        self.messages.append({
            "role": "assistant",
            "content": assistant_message
        })
        return assistant_message

# Example usage
if __name__ == "__main__":
    # Create session with system prompt
    session = ChatSession(
        system_prompt="You are a helpful statistics tutor. "
                      "Explain concepts clearly with examples."
    )
    # Multi-turn conversation
    print("Q1:", session.send("What is a p-value?"))
    print("\n" + "="*50 + "\n")
    print("Q2:", session.send("Give me an example calculation."))
    print("\n" + "="*50 + "\n")
    print("Q3:", session.send("How do I interpret it?"))
Example 3: Batch Processing
Process multiple questions from a file and save results to JSON.
#!/usr/bin/env python3
"""
Batch process student questions
"""
import os
import requests
import json
from typing import List, Dict

API_KEY = os.environ.get("GENAI_API_KEY")
BASE_URL = "https://genai.rcac.purdue.edu/api"
MODEL = "llama3"

def answer_question(question: str) -> Dict:
    """Answer a single question"""
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json"
    }
    payload = {
        "model": MODEL,
        "messages": [
            {
                "role": "system",
                "content": "You are a course assistant. Provide clear answers."
            },
            {
                "role": "user",
                "content": question
            }
        ],
        "temperature": 0.5,
        "max_tokens": 300
    }
    try:
        response = requests.post(
            f"{BASE_URL}/chat/completions",
            headers=headers,
            json=payload,
            timeout=60
        )
        response.raise_for_status()
        result = response.json()
        return {
            "question": question,
            "answer": result['choices'][0]['message']['content'],
            "success": True
        }
    except Exception as e:
        return {
            "question": question,
            "answer": None,
            "success": False,
            "error": str(e)
        }

def batch_process(questions: List[str], output_file: str = "answers.json"):
    """Process multiple questions and save results"""
    results = []
    for i, question in enumerate(questions, 1):
        print(f"Processing question {i}/{len(questions)}...")
        result = answer_question(question)
        results.append(result)
    # Save to file
    with open(output_file, 'w') as f:
        json.dump(results, f, indent=2)
    print(f"\nResults saved to {output_file}")
    return results

# Example usage
if __name__ == "__main__":
    questions = [
        "What is the difference between population and sample?",
        "How do I calculate a confidence interval?",
        "When should I use a t-test vs a z-test?",
        "What does 'statistically significant' mean?",
    ]
    batch_process(questions)
Temperature Settings Guide
| Temperature | Use Case | Example |
|---|---|---|
| 0.2 - 0.3 | Factual, consistent responses | Definitions, calculations, formulas |
| 0.5 - 0.7 | Balanced responses | General Q&A, explanations |
| 0.8 - 1.0 | Creative responses | Feedback, brainstorming, essays |
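To experiment with these settings, it helps to make temperature an explicit parameter instead of hardcoding it. A minimal sketch that reuses the payload shape from Example 1 (the `MODEL` value mirrors the earlier configuration):

```python
# Make temperature an explicit parameter so you can match it to the task.
MODEL = "llama3"

def build_payload(message: str, temperature: float = 0.7, max_tokens: int = 500) -> dict:
    """Construct the request body for /chat/completions with a chosen temperature."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": message}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

# Low temperature for factual answers, higher for open-ended tasks
factual = build_payload("Define a p-value.", temperature=0.2)
creative = build_payload("Brainstorm final project ideas.", temperature=0.9)
```

Pass the resulting dictionary as the `json=` argument to `requests.post`, exactly as in Example 1.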
Educational Use Cases
1. Office Hours Assistant
system_prompt = """You are a teaching assistant for STAT 350.
Context:
- You are supporting an undergraduate introductory statistics course.
- The primary resources are the course textbook and official course notes.
Pedagogical behavior:
- Ask 1-2 clarifying questions if the student's question is vague.
- Provide hints and partial steps rather than complete solutions to graded problems.
- Use simple numeric examples and short algebraic steps.
- Encourage the student to try intermediate steps before giving the final result.
Boundaries:
- Do not give full solutions to current graded homework or take-home exams.
- If you are unsure or information is missing, say so clearly and suggest next steps.
"""
session = ChatSession(system_prompt)
response = session.send("I'm stuck on homework problem 3")
2. Assignment Feedback Generator
def generate_feedback(submission: str, rubric: str):
    """Generate constructive feedback based on a rubric"""
    system_prompt = """You are an instructor writing formative feedback.
Audience:
- Undergraduate students in a statistics or data science course.
Goals:
- Emphasize conceptual understanding and communication, not perfection.
- Be specific and actionable, but keep tone supportive and professional.
When responding:
- Use the rubric criteria explicitly.
- Separate feedback into: Strengths, Areas for improvement, and Specific suggestions.
- Avoid giving a numeric grade unless explicitly requested.
- If the submission is incomplete, focus on what is present and suggest next steps.
"""
    prompt = f"""Review this submission against the rubric.
Provide specific, constructive feedback.
Rubric:
{rubric}
Submission:
{submission}
Format your feedback as:
1. Strengths
2. Areas for improvement
3. Specific suggestions
"""
    # Use ChatSession from Example 2 so the system prompt is actually applied
    session = ChatSession(system_prompt)
    return session.send(prompt)
3. Study Guide Generator
def create_study_guide(topic: str, lecture_notes: str):
    """Generate a study guide from lecture notes"""
    system_prompt = """You are a study guide generator for an undergraduate course.
Objectives:
- Distill long notes into a concise, well-organized study guide.
- Highlight threshold concepts and common misconceptions.
- Include a few practice questions that match the course level.
Style:
- Use clear headings and bullet points.
- Keep formulas readable in plain text or LaTeX-style notation.
- Avoid informal language; write in a neutral, instructional tone.
"""
    prompt = f"""Create a study guide for: {topic}
Based on these lecture notes:
{lecture_notes}
Include:
- Key concepts and definitions
- Important formulas or principles
- Practice questions
- Common misconceptions
"""
    # Use ChatSession from Example 2 so the system prompt is actually applied
    session = ChatSession(system_prompt)
    return session.send(prompt)
4. Quiz Question Generator
def generate_quiz_questions(topic: str, difficulty: str, num_questions: int):
    """Generate quiz questions for a topic"""
    system_prompt = """You are generating quiz questions for an undergraduate class.
Design principles:
- Align difficulty with the stated level (easy, medium, hard).
- Test one main concept per question.
- Avoid trick questions and ambiguous wording.
- Make distractors (wrong answers) plausible but clearly incorrect.
Output format:
- JSON list where each item has:
  - "question"
  - "options" (A, B, C, D)
  - "answer" (single correct option letter)
  - "explanation" (short, focusing on the key idea)
"""
    prompt = f"""Generate {num_questions} {difficulty} level questions about {topic}.
For each question provide:
1. The question
2. Multiple choice options (A, B, C, D)
3. Correct answer
4. Brief explanation
Format as JSON.
"""
    # Use ChatSession from Example 2 so the system prompt is actually applied
    session = ChatSession(system_prompt)
    return session.send(prompt)
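Because the model returns JSON as plain text, sometimes wrapped in extra prose or code fences, it is worth parsing the response defensively before using it. A hedged sketch; the field names match the output format requested in the system prompt above:

```python
import json

def parse_quiz_json(raw: str):
    """Extract and parse the first JSON list found in the model's text output."""
    # Models sometimes surround the JSON with commentary; slice out [...].
    start, end = raw.find('['), raw.rfind(']')
    if start == -1 or end == -1:
        raise ValueError("No JSON list found in model output")
    return json.loads(raw[start:end + 1])

# Typical response: JSON surrounded by extra prose
sample = ('Here are your questions:\n'
          '[{"question": "What does a p-value measure?", '
          '"options": {"A": "...", "B": "...", "C": "...", "D": "..."}, '
          '"answer": "A", "explanation": "..."}]')
questions = parse_quiz_json(sample)
```

If parsing fails, you can retry the request with a lower temperature or a stronger "output only JSON" instruction.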
Best Practices
1. API Key Security
❌ Never do this:
API_KEY = "sk-abc123..." # Hardcoded in code
✅ Do this instead:
import os

API_KEY = os.environ.get("GENAI_API_KEY")
if not API_KEY:
    raise ValueError("GENAI_API_KEY not set")
2. Error Handling
Always wrap API calls in try-except blocks:
try:
    response = requests.post(url, headers=headers, json=payload, timeout=60)
    response.raise_for_status()
    result = response.json()
except requests.exceptions.Timeout:
    print("Request timed out. Try again.")
except requests.exceptions.HTTPError as e:
    print(f"HTTP error: {e}")
except Exception as e:
    print(f"Unexpected error: {e}")
3. Rate Limiting
Be respectful of shared resources:
import time

def batch_with_delay(items, delay=1.0):
    """Process items with a delay between requests"""
    for item in items:
        result = process_item(item)  # process_item is a placeholder for your own per-item API call
        time.sleep(delay)  # Wait between requests
        yield result
4. System Prompts
The system prompt is the main place where you control the behavior of the models available in GenAI Studio (for example, GPT-OSS, Llama 3/4, Gemma, DeepSeek). It acts like a very concise syllabus for the assistant: who it is, what it should do, how it should respond, and what it should avoid.
Effective System Prompts:
Good system prompts make the model:
- Align with course goals and level (for example, STAT 350 vs graduate course)
- Follow a consistent pedagogical style (for example, Socratic, stepwise, feedback-oriented)
- Respect boundaries (no full solutions to graded work, acknowledge uncertainty)
- Use the same notation, terminology, and conventions as your course
Core template for educational system prompts
system_prompt = """Role:
- You are <role> for <course / context>.
Audience and level:
- <Who the students are, and their approximate background.>
Primary tasks:
- <What you should help with: explanations, hints, study guides, feedback, etc.>
Pedagogical style:
- <How to teach: ask clarifying questions, give hints, scaffold reasoning, etc.>
Constraints and boundaries:
- <What you must not do: full graded solutions, policy violations, hallucinated citations.>
Use of course resources:
- <What materials to rely on (textbook, notes, instructor docs) when available.>
Uncertainty:
- If you are unsure or lack information, say so explicitly and suggest reasonable next steps.
"""
Bad vs better system prompts
system_prompt = "You are helpful."
system_prompt = """You are a statistics course assistant for STAT 350.
Audience:
- Undergraduate students with one semester of calculus (or less) and limited prior exposure
to formal probability.
When answering:
- Use the same notation and terminology as the STAT 350 course materials.
- Start with a short conceptual explanation before any formula or computation.
- Ask the student 1-2 clarifying questions if the prompt is ambiguous.
- Offer hints and partial solutions before giving a complete worked example.
Boundaries:
- Do not provide full solutions to current graded homework or exams.
- If the question depends on information not provided, explain what is missing.
- If a question involves policy or grading decisions, defer to the instructor.
"""
Socratic and scaffolding strategies
You can push models like Llama, Gemma, DeepSeek, or GPT-OSS toward more Socratic behavior and stepwise reasoning without overwhelming students. The key is to instruct the model to work things out internally but present explanations in small, digestible steps.
system_prompt = """You are a Socratic tutor for an introductory statistics course.
Style:
- Ask 1-3 short questions before giving a detailed solution.
- Reveal the solution in small steps, pausing to let the student think.
- Emphasize interpretation and intuition, not just formula substitution.
Answer format:
1. Briefly restate the student's goal in your own words.
2. Ask a targeted question that helps them identify the next step.
3. Provide a short hint or partial calculation.
4. Only give a full worked solution if the student explicitly asks or is stuck.
Safety and limitations:
- If you are not sure about an answer, say so and suggest what the student
could check in the textbook or notes.
"""
Controlling tone and verbosity
Different open-source models can vary in verbosity and style. You can normalize this with explicit instructions in the system prompt:
system_prompt = """You are an AI tutor for an undergraduate data science course.
Tone:
- Professional, neutral, and encouraging.
- Avoid jokes, emojis, or overly casual language.
Length:
- Aim for 2β4 short paragraphs for conceptual questions.
- Use bullet points for lists of steps or properties.
- If the student asks for more detail, you may elaborate further.
"""
Handling limitations and uncertainty
Regardless of the underlying model family (Llama, Gemma, DeepSeek, GPT-OSS, etc.), it is important to set explicit expectations around limitations and uncertainty:
system_prompt = """General policies:
- If you are missing critical information (e.g., sample size, data values),
state what is missing instead of guessing.
- If you are not confident in a numeric answer, say so and suggest how a student
could verify the result (for example, using R, Python, or a calculator).
- Do not fabricate references, page numbers, or external links.
- When referencing course materials, refer to them generically
(e.g., "the course notes on confidence intervals") unless a specific label
is provided in the prompt.
"""
Troubleshooting
Common Issues and Solutions
| Issue | Solution |
|---|---|
| 401 Unauthorized | Check your API key: echo $GENAI_API_KEY |
| Model not found | Verify model name in GenAI Studio. Common: llama3, deepseek-r1 |
| Timeout errors | Increase timeout: timeout=120 or reduce max_tokens |
| Empty responses | Check prompt clarity and lower temperature: temperature=0.5 |
| Import errors | Run: pip install -r requirements.txt |
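For timeout errors in particular, a small retry helper with exponential backoff can make batch scripts noticeably more robust. This is a generic sketch, not part of the GenAI Studio API: wrap your `requests.post` call in a function or lambda and pass it in.

```python
import time

def with_retry(func, retries=3, base_delay=1.0, retry_on=(Exception,)):
    """Call func(), retrying with exponential backoff on the given exceptions."""
    for attempt in range(retries):
        try:
            return func()
        except retry_on:
            if attempt == retries - 1:
                raise  # Give up after the final attempt
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Usage sketch: wrap the API call from Example 1
# response = with_retry(
#     lambda: requests.post(url, headers=headers, json=payload, timeout=120),
#     retry_on=(requests.exceptions.Timeout, requests.exceptions.ConnectionError),
# )
```

Keep the retry count low and the delays generous; GenAI Studio is a shared resource.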
Testing Your Setup
Use the quickstart test script to verify everything works:
python quickstart_test.py
This will:
- ✅ Check if API key is set
- ✅ List available models
- ✅ Test a simple completion
- ✅ Offer interactive demo
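If you do not have `quickstart_test.py` handy, the sketch below covers the first two checks. It assumes the API key convention from Step 3 and that the server exposes a `/api/models` listing endpoint (standard in OpenWebUI deployments, but verify against your instance):

```python
#!/usr/bin/env python3
"""Minimal setup check - a simplified stand-in for quickstart_test.py."""
import os
import requests

BASE_URL = "https://genai.rcac.purdue.edu/api"

def check_api_key() -> bool:
    """Return True if the GENAI_API_KEY environment variable is set."""
    return bool(os.environ.get("GENAI_API_KEY"))

def list_models(api_key: str):
    """Fetch available models (assumes the /api/models endpoint exposed by OpenWebUI)."""
    response = requests.get(
        f"{BASE_URL}/models",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    if not check_api_key():
        print("GENAI_API_KEY is NOT set - see Step 3.")
    else:
        try:
            print("API key found. Models:", list_models(os.environ["GENAI_API_KEY"]))
        except requests.exceptions.RequestException as e:
            print(f"API key found, but the request failed: {e}")
```

If the model list prints successfully, Example 1 should work as-is.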
Resources
Documentation
- Purdue GenAI Studio - Official RCAC documentation
- OpenWebUI Docs - Platform documentation
- OpenAI API Reference - Compatible API format
Example Repositories
- STAT350-LatticeAI - Full Flask application example with database integration, file upload, and LaTeX rendering
Support
- RCAC Support: www.rcac.purdue.edu/contact
- GenAI Studio Office Hours: Check RCAC website for schedule
Next Steps
Ready to Start?
- Set up Python and dependencies (virtual environment + pip install -r requirements.txt)
- Get your API key from genai.rcac.purdue.edu
- Try Example 1 to test basic connectivity
- Create a custom model with your course materials (optional)
- Build a simple tool for your specific use case
- Share with students or use for course preparation
Example Workflow
# 1. Create and activate virtual environment (recommended)
python -m venv .venv
# or: python3 -m venv .venv
# then activate as described above
# 2. Set API key (example for bash)
export GENAI_API_KEY="your-key-here"
# 3. Install dependencies
pip install -r requirements.txt
# 4. Test setup
python quickstart_test.py
# 5. Try an example
python simple_chat.py
# 6. Build something for your course!