Demo 03a: AWS Hands-On
EE 547: Spring 2026
This demo walks through core AWS services: launching EC2 instances, configuring IAM roles, deploying an application, and creating a custom VPC. By the end, you’ll have deployed a web application that invokes AWS Bedrock foundation models.
Prerequisites:
- AWS account with console access
- AWS CLI installed and configured locally
- SSH client (Terminal on Mac/Linux, or PuTTY on Windows)
- Docker installed locally
- Python 3.10+ installed locally
Region: All resources are created in us-west-2 (Oregon).
Naming convention: Resources are tagged with ee547-demo1 for easy identification and cleanup.
Console Tour
The AWS Management Console is a web interface to AWS services. Everything you do in the console translates to API calls—the same calls you’ll make from the CLI and SDK. The console is useful for exploration and one-off tasks; automation requires the CLI or SDK.
Region and Account
Before doing anything, check two things in the top-right corner of the console:
Region selector — Displays the current region (e.g., “Oregon” or “us-west-2”). Click to change regions. Set region to us-west-2 (Oregon).
Account menu — Click your account name to see your 12-digit Account ID. You’ll need this for naming resources (e.g., S3 buckets). The account ID uniquely identifies your AWS account and appears in ARNs.
Every AWS account has a root user—the email address used to create the account. The root user has unrestricted access to everything and cannot be limited by IAM policies. Don’t use the root user for daily work. Instead, create an IAM user with admin privileges and use that. Reserve the root user for account-level tasks like changing the account email or closing the account.
Region matters because:
- Most resources exist in a specific region
- Resources in different regions can’t directly interact (an EC2 instance in us-east-1 can’t attach an EBS volume from us-west-2)
- Pricing varies by region
- Some services (like Bedrock) have different model availability per region
The console remembers your region selection, but verify it whenever you start working—accidentally creating resources in the wrong region is a good way to find a costly surprise at the end of the month.
EC2 Dashboard
Navigate to EC2 using the search bar or Services menu.
The EC2 dashboard shows a summary of your compute resources. The left sidebar organizes resources by category:
Instances — Your virtual machines. This is the main view. Each instance has an ID (i-0abc123...), a state (running, stopped, terminated), and associated resources (security groups, volumes, network interfaces).
Images (AMIs) — Amazon Machine Images are templates containing an operating system and optionally pre-installed software. When you launch an instance, you select an AMI. AWS provides public AMIs for common operating systems; you can also create your own.
Security Groups — Firewall rules controlling what traffic can reach your instances. Security groups are stateful: if you allow inbound traffic on a port, the response is automatically allowed outbound.
Key Pairs — SSH key pairs for authenticating to Linux instances. AWS stores the public key; you download and keep the private key. Lose the private key and you lose SSH access to instances using that key pair.
Volumes — EBS storage. Volumes are network-attached block storage that persist independently of instances. When you terminate an instance, attached volumes can either be deleted or retained depending on configuration.
When you select a running instance, the Monitoring tab shows basic metrics: CPU utilization, network traffic, disk I/O, and status checks. These metrics are collected automatically—no agent installation required. For memory and disk usage metrics, you’d need to install the CloudWatch agent, but CPU and network are always available.
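The same metrics are available programmatically through the CloudWatch API. A minimal boto3 sketch pulling the last hour of average CPU utilization; the instance ID is a placeholder, and it assumes credentials with CloudWatch read access:

```python
from datetime import datetime, timedelta, timezone

import boto3

# Placeholder instance ID; substitute one from your EC2 console
INSTANCE_ID = "i-0abc123def456"

cloudwatch = boto3.client("cloudwatch", region_name="us-west-2")

now = datetime.now(timezone.utc)
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,  # basic monitoring reports at 5-minute granularity
    Statistics=["Average"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f"{point['Average']:.1f}%")
```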
IAM Dashboard
Navigate to IAM (Identity and Access Management).
IAM is global—it doesn’t belong to a region. Users, roles, and policies you create here apply across all regions.
Users — Human identities. Each user can have console access (password), programmatic access (access keys), or both.
Roles — Identities that AWS services or applications can assume. Unlike users, roles don’t have permanent credentials. Instead, when something assumes a role, it receives temporary credentials that expire. This is how EC2 instances access other AWS services without hardcoded keys.
Policies — JSON documents defining permissions. A policy specifies which actions are allowed or denied on which resources. Policies attach to users, groups, or roles.
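To make role assumption concrete, here is a hedged boto3 sketch of an explicit STS AssumeRole call. The role ARN is a placeholder, and the call only succeeds if the role’s trust policy allows your current identity (the EC2 role created later in this demo trusts only the EC2 service, so it cannot be assumed this way):

```python
import boto3

sts = boto3.client("sts")

# Placeholder ARN; the role's trust policy must allow your identity to assume it
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/some-assumable-role",
    RoleSessionName="demo-session",
    DurationSeconds=3600,  # temporary credentials expire after one hour
)

creds = resp["Credentials"]
print(creds["AccessKeyId"], "expires", creds["Expiration"])

# A session built from these credentials acts as the role until expiry
role_session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```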
S3 Console
Navigate to S3.
S3 is also conceptually global—the bucket list shows all buckets in your account regardless of which region the console is set to. However, each bucket exists in a specific region, and the data physically resides there.
We won’t use S3 in this demo, but note:
- Bucket names are globally unique across all AWS accounts
- Objects are identified by key (a string that looks like a path, but S3 has no directories)
- S3 is object storage, not a filesystem—different access patterns and constraints
Bedrock Console
Navigate to Amazon Bedrock.
Bedrock provides access to foundation models from multiple providers (Anthropic, Amazon, Meta, etc.) through a unified API. You don’t manage model infrastructure—you make API calls and pay per token.
Bedrock model availability varies by region. Verify you’re in us-west-2 before proceeding.
Model Access
Navigate to Bedrock → Model catalog.
The catalog lists available foundation models by provider. Select any Claude model to view details and pricing.
If this is your first time using Anthropic models, click Request access and complete the use case form. Approval takes approximately 15 minutes.
We’ll use these models in the demo:
| Model | Provider | Model ID |
|---|---|---|
| Claude Haiku 4.5 | Anthropic | us.anthropic.claude-haiku-4-5-20251001-v1:0 |
| Claude Sonnet 4.5 | Anthropic | us.anthropic.claude-sonnet-4-5-20250929-v1:0 |
| Nova Lite | Amazon | us.amazon.nova-lite-v1:0 |
The us. prefix indicates a cross-region inference profile.
Account administrators can restrict model access through IAM policies and Service Control Policies.
Playgrounds
Bedrock includes interactive playgrounds for testing models without writing code. The Chat playground is useful for quick experimentation—select a model, adjust parameters, and see responses.
We’ll use the API directly rather than the playground, but it’s useful for verifying access is working.
Billing and Cost Management
AWS charges accumulate as you use resources. Monitoring costs and setting alerts prevents surprises.
Accessing Billing
Click your account name (top-right) → Billing and Cost Management.
By default, only the root account can view billing. If you’re using an IAM user and get “Access Denied,” the root account holder must enable IAM access to billing in Account Settings.
Billing Dashboard
The dashboard shows:
- Month-to-date charges — Current spending this billing cycle
- Forecasted month-end charges — Projected total based on current usage
- Cost breakdown by service — Which services are costing money
Review this periodically—at minimum, check after launching new resources and before the end of each month.
Cost Explorer
Cost Explorer provides detailed historical analysis:
- Filter by service, region, or time period
- See daily or monthly trends
- Identify which resources drive costs
The first time you access Cost Explorer, it may take 24 hours to populate historical data.
Budgets and Alerts
Navigate to Budgets in the left sidebar.
Create budget → Cost budget
Set up an alert before you start creating resources:
Budget name: ee547-monthly
Budget amount: Set a threshold you’re comfortable with (e.g., $25, $50)
Alert threshold: Configure alerts at 50%, 80%, and 100% of budget
Email recipients: Your email address
Alerts are sent when spending crosses the threshold. This gives early warning before costs become significant.
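The same budget can be created from code instead of the console. A hedged sketch using the boto3 Budgets API; the $25 amount and email address are placeholders:

```python
import boto3

account_id = boto3.client("sts").get_caller_identity()["Account"]

boto3.client("budgets").create_budget(
    AccountId=account_id,
    Budget={
        "BudgetName": "ee547-monthly",
        "BudgetLimit": {"Amount": "25", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": pct,  # percent of the budget amount
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "you@example.com"}
            ],
        }
        for pct in (50.0, 80.0, 100.0)
    ],
)
```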
Free Tier Usage
Navigate to Free Tier in the left sidebar.
This shows your consumption against free tier limits:
| Service | Free Tier Limit | Typical Usage |
|---|---|---|
| EC2 | 750 hours t2.micro/month | ~1 instance continuously |
| EBS | 30 GB | Root volumes for a few instances |
| S3 | 5 GB storage | Small datasets |
| Data Transfer | 100 GB out/month | Moderate traffic |
The free tier applies for 12 months after account creation. After that, or if you exceed limits, standard pricing applies.
What Costs Money Now
For this demo, the primary costs are:
- EC2 instances — t2.micro at ~$0.0116/hour, covered by free tier (750 hours/month) if your account is within 12 months of creation.
- Bedrock invocations — Per-token pricing. Claude Haiku 4.5 is ~$0.001/1K input tokens, $0.005/1K output tokens. A demo session costs pennies.
Terminate instances when done. Stopped instances don’t incur compute charges, but their EBS volumes still do.
CLI Basics
The AWS CLI provides command-line access to AWS services. Every console action has a CLI equivalent, and many operations are easier or only possible through the CLI. More importantly, CLI commands can be scripted and automated.
Verify CLI Installation
The AWS CLI should already be installed on your local machine. Verify:
aws --version
Expected output shows version 2.x:
aws-cli/2.15.0 Python/3.11.6 Darwin/23.0.0 source/arm64
If not installed, see the AWS CLI installation guide.
Create an IAM User
The CLI needs credentials. We’ll create an IAM user with programmatic access.
Navigate to IAM → Users → Create user.
User name: ee547-demo1-user
Click Next.
Permissions options: Attach policies directly
Search for and select AdministratorAccess.
AdministratorAccess grants full access to all AWS services. For production, create a policy with only the permissions needed. For this demo, admin access simplifies setup.
Click Next, then Create user.
Create Access Keys
Select the newly created user, then go to Security credentials tab.
Access keys → Create access key
Use case: Command Line Interface (CLI)
Check the confirmation box acknowledging the recommendation to use alternatives, then click Next.
Description: Demo CLI access
Click Create access key.
Important: Copy or download both the Access Key ID and Secret Access Key now. The secret key is only shown once. If you lose it, you’ll need to create a new access key.
Configuration
The CLI needs credentials and a default region:
aws configure
This prompts for:
- AWS Access Key ID — From your IAM user
- AWS Secret Access Key — From your IAM user (shown only once when created)
- Default region — Use us-west-2
- Default output format — Use json (or table for more readable output)
Configuration is stored in ~/.aws/credentials and ~/.aws/config. You can have multiple profiles for different accounts or roles.
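boto3 reads the same files from Python. A small sketch showing the default profile alongside a named profile; the profile name ee547 is a placeholder:

```python
import boto3

# Uses the default profile from ~/.aws/credentials and ~/.aws/config
default_session = boto3.Session()

# A named profile, useful when juggling multiple accounts;
# "ee547" is a placeholder profile name
course_session = boto3.Session(profile_name="ee547", region_name="us-west-2")

print(default_session.region_name)
print(course_session.region_name)
```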
Identity Check
The first command to run after configuration—verify who the CLI thinks you are:
aws sts get-caller-identity
Output:
{
"UserId": "AIDAEXAMPLEUSERID",
"Account": "123456789012",
"Arn": "arn:aws:iam::123456789012:user/your-username"
}
This confirms:
- Your credentials are valid
- Which AWS account you’re operating in
- Which IAM identity you’re using
Run this command whenever you’re unsure about your current identity—especially after switching profiles or on a new machine.
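The equivalent check from Python, handy as a sanity check at the top of scripts:

```python
import boto3

identity = boto3.client("sts").get_caller_identity()
print(identity["Account"])  # 12-digit account ID
print(identity["Arn"])      # the IAM user or assumed role you are acting as
```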
EC2 Commands
List running instances:
aws ec2 describe-instances \
--query 'Reservations[].Instances[].[InstanceId,State.Name,InstanceType,PublicIpAddress]' \
--output table
The --query parameter uses JMESPath to filter and format output. Without it, describe-instances returns comprehensive JSON with every detail about every instance. The query above extracts just the instance ID, state, type, and public IP.
If you have no instances yet, the output will be empty. That’s expected.
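The same JMESPath expressions work from Python via the jmespath package, which ships as a boto3 dependency. A short sketch applying the query above to the raw API response:

```python
import boto3
import jmespath  # the CLI's --query flag uses this same query language

data = boto3.client("ec2", region_name="us-west-2").describe_instances()

rows = jmespath.search(
    "Reservations[].Instances[].[InstanceId, State.Name, InstanceType, PublicIpAddress]",
    data,
)
for row in rows:
    print(row)
```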
List available AMIs (Ubuntu 24.04 LTS in us-west-2):
aws ec2 describe-images \
--owners 099720109477 \
--filters "Name=name,Values=ubuntu/images/hvm-ssd-gp3/ubuntu-noble-24.04-amd64-server-*" \
--query 'Images | sort_by(@, &CreationDate) | [-1].[ImageId,Name]' \
--output table
This finds the latest Ubuntu 24.04 AMI. The owner 099720109477 is Canonical (Ubuntu’s publisher). The query sorts by creation date and takes the most recent.
Bedrock Commands
List available foundation models:
aws bedrock list-foundation-models \
--region us-west-2 \
--query 'modelSummaries[?providerName==`Anthropic` || providerName==`Amazon`].[modelId,modelName,providerName]' \
--output table
This shows Anthropic and Amazon models available in your region. The model IDs are what you’ll use when invoking models.
Invoking a Model
Bedrock model invocation requires a JSON request body. The format varies by provider. For Claude models, create a template file with a placeholder for the prompt:
cat > prompt-template.json << 'EOF'
{
"anthropic_version": "bedrock-2023-05-31",
"max_tokens": 256,
"messages": [
{
"role": "user",
"content": "PROMPT_PLACEHOLDER"
}
]
}
EOF
Create a prompt file:
echo "Explain what an API is in one sentence." > prompt.txtGenerate the request by substituting the prompt into the template:
sed "s/PROMPT_PLACEHOLDER/$(cat prompt.txt)/" prompt-template.json > request.jsonInvoke the model:
aws bedrock-runtime invoke-model \
--model-id us.anthropic.claude-haiku-4-5-20251001-v1:0 \
--region us-west-2 \
--content-type application/json \
--accept application/json \
--cli-binary-format raw-in-base64-out \
--body file://request.json \
output.json
The us. prefix indicates a cross-region inference profile, which routes requests to available capacity across US regions. Anthropic models on Bedrock require this prefix.
The response is written to output.json. Extract the text:
cat output.json | jq -r '.content[0].text'
To try a different model, change the --model-id:
# Claude Sonnet 4.5
aws bedrock-runtime invoke-model \
--model-id us.anthropic.claude-sonnet-4-5-20250929-v1:0 \
--region us-west-2 \
--content-type application/json \
--accept application/json \
--cli-binary-format raw-in-base64-out \
--body file://request.json \
output.json
cat output.json | jq -r '.content[0].text'
The request format is identical—only the model ID changes. This is the value of a unified API: switch models by changing one parameter.
Amazon Nova Models
Amazon Nova uses a different request format. Create a Nova-specific request:
cat > nova-request.json << 'EOF'
{
"messages": [
{
"role": "user",
"content": [{"text": "Explain what an API is in one sentence."}]
}
],
"inferenceConfig": {
"maxTokens": 256,
"temperature": 0.7,
"topP": 0.9
}
}
EOF
Invoke:
aws bedrock-runtime invoke-model \
--model-id us.amazon.nova-lite-v1:0 \
--region us-west-2 \
--content-type application/json \
--accept application/json \
--cli-binary-format raw-in-base64-out \
--body file://nova-request.json \
nova-output.json
cat nova-output.json | jq -r '.output.message.content[0].text'
Note the different request and response structures. Each provider has its own format, though Bedrock normalizes the transport layer.
Other Useful Commands
Check your S3 buckets:
aws s3 ls
Get information about the current region:
aws ec2 describe-availability-zones \
--query 'AvailabilityZones[].ZoneName' \
--output table
These commands verify that your CLI is working and your credentials have appropriate permissions. We’ll use more specific commands as we create resources.
First EC2 Instance
We’ll launch an EC2 instance through the console, then connect and set it up manually. Before launching, we need to create a key pair and security group.
Tag all resources with ee547-demo1 so they can be found and cleaned up later. Where a resource has a Name field, use ee547-demo1 or a variant like ee547-demo1-sg. Consistent tagging is essential when working in shared accounts or running multiple projects.
Create Key Pair
Navigate to EC2 → Key Pairs → Create key pair.
Name: ee547-demo1-key
Key pair type: RSA
Private key file format: .pem (for OpenSSH on Mac/Linux) or .ppk (for PuTTY on Windows)
Click Create key pair.
The private key downloads automatically. Save this file—AWS doesn’t store it and you can’t download it again.
On Mac/Linux, set appropriate permissions:
chmod 400 ~/Downloads/ee547-demo1-key.pem
SSH requires private keys to have restricted permissions (readable only by owner).
Alternatively, if you already have an SSH key pair locally, you can import the public key: EC2 → Key Pairs → Actions → Import key pair.
Create Security Group
Navigate to EC2 → Security Groups → Create security group.
Security group name: ee547-demo1-sg
Description: Security group for ee547 demo
VPC: Select the default VPC
Inbound rules → Add rule:
| Type | Port | Source | Description |
|---|---|---|---|
| SSH | 22 | 0.0.0.0/0 | SSH access |
Click Create security group.
Allowing SSH from 0.0.0.0/0 (anywhere) is acceptable for a demo. For production, restrict to your IP address or use a bastion host.
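For reference, the same security group can be created programmatically with boto3. This is a sketch of the steps just performed, not something to run now; running it against the same VPC would fail because the group name already exists:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Find the default VPC instead of hardcoding its ID
default_vpc_id = ec2.describe_vpcs(
    Filters=[{"Name": "is-default", "Values": ["true"]}]
)["Vpcs"][0]["VpcId"]

sg = ec2.create_security_group(
    GroupName="ee547-demo1-sg",
    Description="Security group for ee547 demo",
    VpcId=default_vpc_id,
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "SSH access"}],
    }],
)
print("created", sg["GroupId"])
```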
Launch Instance
Navigate to EC2 → Instances → Launch instances.
Application and OS Images (AMI)
Select Ubuntu from the Quick Start tabs, then choose Ubuntu Server 24.04 LTS (HVM), SSD Volume Type.
Note the AMI ID—it looks like ami-0abcdef1234567890. AMI IDs are region-specific; the same Ubuntu version has different IDs in different regions.
Instance Type
Select t2.micro.
- 1 vCPU, 1 GB memory
- Free tier eligible (750 hours/month for 12 months after account creation)
- Sufficient for this demo
t2.micro is the only general-purpose instance type covered by the free tier. After the 12-month window, or once you exceed 750 hours in a month, standard rates apply (~$0.0116/hour in us-west-2).
Key Pair
Select ee547-demo1-key (the key pair created earlier).
Network Settings
Click Edit to expand network settings.
VPC: Use the default VPC (should be pre-selected)
Subnet: No preference (AWS chooses an availability zone)
Auto-assign public IP: Enable
A public IP is required to SSH into the instance from outside AWS. Without it, the instance is only reachable from within the VPC.
Firewall (security groups): Select existing security group
Select ee547-demo1-sg.
Configure Storage
1x 8 GiB gp3 — The default root volume is sufficient. gp3 is the current generation of general-purpose SSD storage.
Note the Delete on termination checkbox—when checked (default), the volume is deleted when you terminate the instance. For persistent data, you’d uncheck this or use separate volumes.
Advanced Details
Leave defaults for now.
Launch
Click Launch instance.
The instance enters “pending” state while AWS provisions it. Within a minute or two, it transitions to “running.”
Connect to the Instance
Once the instance is running, find its public IP address in the console (Instances → select your instance → Details tab → Public IPv4 address).
SSH into the instance:
ssh -i <key-file> ubuntu@<public-ip>
The default username depends on the AMI. Ubuntu uses ubuntu, Amazon Linux uses ec2-user, Debian uses admin. Using the wrong username results in “Permission denied.”
The first connection prompts you to verify the host fingerprint:
The authenticity of host '54.x.x.x (54.x.x.x)' can't be established.
ED25519 key fingerprint is SHA256:xxxx...
Are you sure you want to continue connecting (yes/no/[fingerprint])?
Type yes to continue. This adds the host to your ~/.ssh/known_hosts file.
You’re now connected to your EC2 instance. The prompt shows ubuntu@ip-172-31-x-x, indicating you’re logged in as the ubuntu user on a machine with a private IP in the VPC’s address range.
Instance Setup
The base Ubuntu AMI is minimal. Install the tools we need:
# Update package lists
sudo apt-get update
# Install Python
sudo apt-get install -y python3
# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh
source ~/.local/bin/env
# Verify installation
python3 --version
uv --version
Install Docker:
# Install prerequisites
sudo apt-get install -y ca-certificates curl gnupg lsb-release
# Add Docker's GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Add Docker repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
# Add ubuntu user to docker group
sudo usermod -aG docker ubuntu
The group change requires a new login session to take effect. Log out and back in:
exit
# reconnect
ssh -i <key-file> ubuntu@<public-ip>
Verify Docker:
docker --version
docker run hello-world
Install the AWS CLI:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "/tmp/awscliv2.zip"
sudo apt-get install -y unzip
unzip -q /tmp/awscliv2.zip -d /tmp
sudo /tmp/aws/install
rm -rf /tmp/aws /tmp/awscliv2.zip
# Verify
aws --version
AWS CLI on EC2
Try running an AWS command on the instance:
aws sts get-caller-identity
This fails:
Unable to locate credentials. You can configure credentials by running "aws configure".
The instance has no AWS credentials. Unlike your local machine, there’s no ~/.aws/credentials file here.
You could run aws configure and enter access keys. That works, but consider the implications:
- Access keys would be stored on the instance in plaintext
- Anyone with SSH access could read them
- If the instance is compromised, the keys are compromised
- Keys don’t expire—a leaked key works until manually revoked
- You’d need to manage key rotation across all instances
This is a real problem. EC2 instances often need to call AWS APIs (read from S3, write logs, access databases), but embedding long-lived credentials is a security risk.
IAM Role for EC2
IAM roles solve the credential problem. A role is an identity with permissions but no permanent credentials. Instead, when something assumes a role, AWS issues temporary credentials that expire automatically.
For EC2, this works through instance profiles. An instance profile is a container that holds a role. When you attach an instance profile to an EC2 instance, the instance can assume that role and receive temporary credentials.
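The console hides the instance profile: when you create an EC2 role in the next steps, a profile with the same name is created for you automatically. Doing it by hand from Python makes the pieces visible. A sketch assuming IAM admin permissions; the name my-demo-role is a hypothetical placeholder, chosen so it doesn’t collide with the console walkthrough below:

```python
import json

import boto3

iam = boto3.client("iam")

# Trust policy: only the EC2 service may assume this role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="my-demo-role",  # placeholder name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# The instance profile is the container EC2 actually attaches
iam.create_instance_profile(InstanceProfileName="my-demo-role")
iam.add_role_to_instance_profile(
    InstanceProfileName="my-demo-role",
    RoleName="my-demo-role",
)
```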
Create the IAM Policy
First, create a policy defining what permissions the role will have.
Navigate to IAM → Policies → Create policy.
Select the JSON tab and paste:
demo-permissions.json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "BedrockInvoke",
"Effect": "Allow",
"Action": [
"bedrock:InvokeModel",
"bedrock:InvokeModelWithResponseStream",
"bedrock:ListFoundationModels"
],
"Resource": "*"
},
{
"Sid": "EC2Describe",
"Effect": "Allow",
"Action": "ec2:Describe*",
"Resource": "*"
},
{
"Sid": "STSIdentity",
"Effect": "Allow",
"Action": [
"sts:GetCallerIdentity",
"sts:GetSessionToken"
],
"Resource": "*"
},
{
"Sid": "IAMReadOnly",
"Effect": "Allow",
"Action": [
"iam:GetRole",
"iam:GetInstanceProfile",
"iam:ListAttachedRolePolicies"
],
"Resource": "*"
},
{
"Sid": "S3ListOnly",
"Effect": "Allow",
"Action": [
"s3:ListAllMyBuckets",
"s3:GetBucketLocation"
],
"Resource": "*"
}
]
}
This policy is broader than production best practices. It grants:
- Full Bedrock invoke access (any model)
- Read-only EC2 access (describe operations)
- STS identity operations
- IAM read for inspecting roles
- S3 bucket listing (not object access)
For production, use the principle of least privilege: grant only the specific permissions needed. AWS IAM Access Analyzer can help identify actually-used permissions after the fact.
Click Next.
Policy name: ee547-demo1-policy
Description: Permissions for EE547 demo EC2 instances
Click Create policy.
Create the IAM Role
Navigate to IAM → Roles → Create role.
Trusted entity type: AWS service
Use case: EC2
Click Next.
Search for ee547-demo1-policy and select it.
Click Next.
Role name: ee547-demo1-role
Description: Role for EE547 demo EC2 instances
Click Create role.
Attach Role to Running Instance
You can attach a role to an already-running instance. Navigate to EC2 → Instances, select your ee547-demo1 instance.
Actions → Security → Modify IAM role
Select ee547-demo1-role from the dropdown and click Update IAM role.
The role attachment takes effect within a minute or two. The instance doesn’t need to restart.
Verify Credentials
SSH back into the instance (if not already connected) and try the identity check again:
aws sts get-caller-identity
Now it works:
{
"UserId": "AROAEXAMPLEROLEID:i-0abc123def456",
"Account": "123456789012",
"Arn": "arn:aws:sts::123456789012:assumed-role/ee547-demo1-role/i-0abc123def456"
}
Notice the ARN shows assumed-role/ee547-demo1-role—the instance is using the role’s identity, not a user’s credentials. The suffix after the role name is the instance ID.
Try other commands:
# List EC2 instances
aws ec2 describe-instances \
--query 'Reservations[].Instances[].[InstanceId,State.Name]' \
--output table
# List Bedrock models
aws bedrock list-foundation-models \
--query 'modelSummaries[].[modelId,providerName]' \
--output table
# List S3 buckets
aws s3 ls
These work because the role’s policy grants these permissions.
How It Works
The instance obtains credentials through the instance metadata service (IMDS)—a special endpoint available only from within the instance at 169.254.169.254.
Modern EC2 instances use IMDSv2, which requires a session token:
# Get a session token (valid for 6 hours)
TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" \
-H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
# Query the role name
curl -H "X-aws-ec2-metadata-token: $TOKEN" \
http://169.254.169.254/latest/meta-data/iam/security-credentials/
This returns the role name (ee547-demo1-role). Query further to get the actual credentials:
curl -H "X-aws-ec2-metadata-token: $TOKEN" \
http://169.254.169.254/latest/meta-data/iam/security-credentials/ee547-demo1-role
Output (credentials redacted):
{
"AccessKeyId": "ASIA...",
"SecretAccessKey": "...",
"Token": "...",
"Expiration": "2025-02-04T12:00:00Z"
}
The credentials include an expiration time—typically a few hours. The AWS SDK and CLI handle the token and credential refresh automatically. You don’t manage this; it happens transparently.
This is why no aws configure is needed. The SDK checks the metadata service automatically when running on EC2.
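You can confirm which link of the credential chain boto3 used. A small sketch; on EC2 with an attached role the method typically reports the instance-role source, while on a laptop it reports your credentials file:

```python
import boto3

creds = boto3.Session().get_credentials()

# On EC2 with an attached role this typically reports "iam-role";
# locally it reports something like "shared-credentials-file" or "env"
print(creds.method)

frozen = creds.get_frozen_credentials()
print(frozen.access_key[:4] + "...")  # on EC2: a temporary "ASIA..." key from IMDS
```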
Why Roles Over Access Keys
We could have configured access keys on the instance instead. Both approaches provide credentials, but they have different security properties:
| | Access Keys | Instance Role |
|---|---|---|
| Storage | Plaintext on disk | Not stored; fetched on demand |
| Expiration | Never (until revoked) | Hours (auto-renewed) |
| Rotation | Manual | Automatic |
| Scope | Works anywhere | Only on that instance |
| Compromise impact | Long-term access | Access until instance terminated |
Instance roles don’t eliminate all risk—a compromised instance still has whatever permissions the role grants—but they reduce the blast radius compared to long-lived access keys.
Local Development
Before deploying to EC2, we’ll build and test the application locally. This is standard practice: develop and debug on your local machine where iteration is fast, then deploy to the cloud once things work.
Project Setup
Create a project directory:
mkdir bedrock-demo
cd bedrock-demo
Initialize a git repository:
git init
Create the project files. The application is a Flask server that provides a web interface for invoking Bedrock models.
Application Code
Create app.py:
app.py
"""
Flask server for Bedrock demo.
Provides a simple API to invoke AWS Bedrock models.
"""
import json
import boto3
from flask import Flask, request, jsonify, send_from_directory
# =============================================================================
# Configuration
# =============================================================================
PORT = 5000
HOST = "0.0.0.0"
AWS_REGION = "us-west-2"
# Model configurations
# Each model has different request/response formats
MODELS = {
"claude-haiku-4-5": {
"model_id": "us.anthropic.claude-haiku-4-5-20251001-v1:0",
"name": "Claude Haiku 4.5",
"provider": "Anthropic",
"format": "anthropic"
},
"claude-sonnet-4-5": {
"model_id": "us.anthropic.claude-sonnet-4-5-20250929-v1:0",
"name": "Claude Sonnet 4.5",
"provider": "Anthropic",
"format": "anthropic"
},
"nova-lite": {
"model_id": "us.amazon.nova-lite-v1:0",
"name": "Nova Lite",
"provider": "Amazon",
"format": "nova"
}
}
DEFAULT_MAX_TOKENS = 512
# =============================================================================
# Flask App
# =============================================================================
app = Flask(__name__, static_folder="static")
# Initialize Bedrock client
bedrock = boto3.client("bedrock-runtime", region_name=AWS_REGION)
def build_request_body(prompt: str, model_format: str, max_tokens: int) -> dict:
"""Build the request body based on model provider format."""
if model_format == "anthropic":
return {
"anthropic_version": "bedrock-2023-05-31",
"max_tokens": max_tokens,
"messages": [{"role": "user", "content": prompt}]
}
elif model_format == "nova":
return {
"messages": [
{"role": "user", "content": [{"text": prompt}]}
],
"inferenceConfig": {
"maxTokens": max_tokens,
"temperature": 0.7,
"topP": 0.9
}
}
else:
raise ValueError(f"Unknown model format: {model_format}")
def parse_response(response_body: dict, model_format: str) -> str:
"""Parse the response based on model provider format."""
if model_format == "anthropic":
return response_body["content"][0]["text"]
elif model_format == "nova":
return response_body["output"]["message"]["content"][0]["text"]
else:
raise ValueError(f"Unknown model format: {model_format}")
@app.route("/")
def index():
"""Serve the frontend."""
return send_from_directory(app.static_folder, "index.html")
@app.route("/api/models", methods=["GET"])
def get_models():
"""Return available models."""
models_list = [
{"id": key, "name": cfg["name"], "provider": cfg["provider"]}
for key, cfg in MODELS.items()
]
return jsonify({"models": models_list})
@app.route("/api/invoke", methods=["POST"])
def invoke_model():
"""Invoke a Bedrock model with the given prompt."""
data = request.get_json()
# Validate request
if not data:
return jsonify({"error": "Request body required"}), 400
prompt = data.get("prompt", "").strip()
if not prompt:
return jsonify({"error": "Prompt is required"}), 400
model_key = data.get("model", "claude-haiku-4-5")
if model_key not in MODELS:
return jsonify({"error": f"Unknown model: {model_key}"}), 400
max_tokens = data.get("max_tokens", DEFAULT_MAX_TOKENS)
# Get model configuration
model_config = MODELS[model_key]
model_id = model_config["model_id"]
model_format = model_config["format"]
try:
# Build request
request_body = build_request_body(prompt, model_format, max_tokens)
# Invoke model
response = bedrock.invoke_model(
modelId=model_id,
contentType="application/json",
accept="application/json",
body=json.dumps(request_body)
)
# Parse response
response_body = json.loads(response["body"].read())
result_text = parse_response(response_body, model_format)
return jsonify({
"response": result_text,
"model": model_config["name"],
"provider": model_config["provider"]
})
except bedrock.exceptions.AccessDeniedException as e:
return jsonify({
"error": "Access denied. Check IAM permissions for Bedrock.",
"details": str(e)
}), 403
except bedrock.exceptions.ValidationException as e:
return jsonify({
"error": "Model validation error. The model may not be enabled.",
"details": str(e)
}), 400
except Exception as e:
return jsonify({
"error": "Failed to invoke model",
"details": str(e)
}), 500
@app.route("/api/health", methods=["GET"])
def health():
"""Health check endpoint."""
return jsonify({"status": "healthy", "region": AWS_REGION})
if __name__ == "__main__":
print(f"Starting server on http://{HOST}:{PORT}")
print(f"AWS Region: {AWS_REGION}")
print(f"Available models: {list(MODELS.keys())}")
app.run(host=HOST, port=PORT, debug=True)
The server provides three endpoints:
- GET / — Serves the static frontend
- GET /api/models — Returns available models
- POST /api/invoke — Invokes a Bedrock model with the given prompt
Model-specific request/response formats are handled by build_request_body and parse_response. Adding a new model requires adding its configuration to MODELS and handling its format in these functions.
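A quick way to check these helpers without touching AWS, assuming you run it from the project directory so app.py is importable:

```python
# Sanity-check the two format helpers from app.py
from app import build_request_body, parse_response

body = build_request_body("Hello", "anthropic", max_tokens=64)
assert body["messages"][0]["content"] == "Hello"
assert body["max_tokens"] == 64

body = build_request_body("Hello", "nova", max_tokens=64)
assert body["inferenceConfig"]["maxTokens"] == 64

# parse_response expects each provider's response shape
fake_anthropic = {"content": [{"text": "Hi there"}]}
assert parse_response(fake_anthropic, "anthropic") == "Hi there"
print("format helpers OK")
```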
Frontend
Create the static directory and index.html:
mkdir static
static/index.html
<!DOCTYPE html>
<html lang="en" data-theme="light">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>AWS Bedrock Demo</title>
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@picocss/pico@2/css/pico.min.css">
<style>
:root {
--primary-color: #232f3e;
--accent-color: #ff9900;
--success-color: #2e7d32;
--error-color: #c62828;
--border-radius: 8px;
}
body {
background: linear-gradient(135deg, #f5f7fa 0%, #e4e8ec 100%);
min-height: 100vh;
}
.container {
max-width: 900px;
padding-top: 2rem;
padding-bottom: 2rem;
}
header {
text-align: center;
margin-bottom: 2rem;
}
header h1 {
color: var(--primary-color);
margin-bottom: 0.5rem;
}
header p {
color: #666;
margin: 0;
}
.badge {
display: inline-block;
padding: 0.25rem 0.75rem;
background: var(--accent-color);
color: var(--primary-color);
border-radius: 999px;
font-size: 0.75rem;
font-weight: 600;
margin-left: 0.5rem;
vertical-align: middle;
}
.card {
background: white;
border-radius: var(--border-radius);
box-shadow: 0 2px 8px rgba(0,0,0,0.08);
padding: 1.5rem;
margin-bottom: 1.5rem;
}
.card-header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 1rem;
}
.card-header h3 {
margin: 0;
color: var(--primary-color);
}
.model-select-wrapper {
display: flex;
gap: 1rem;
align-items: end;
}
.model-select-wrapper > div {
flex: 1;
}
.model-select-wrapper label {
font-weight: 500;
margin-bottom: 0.5rem;
display: block;
}
select, textarea {
border-radius: var(--border-radius);
border: 1px solid #ddd;
transition: border-color 0.2s, box-shadow 0.2s;
}
select:focus, textarea:focus {
border-color: var(--accent-color);
box-shadow: 0 0 0 3px rgba(255, 153, 0, 0.15);
}
textarea {
min-height: 120px;
resize: vertical;
font-family: inherit;
}
.button-row {
display: flex;
gap: 1rem;
justify-content: flex-end;
margin-top: 1rem;
}
button {
border-radius: var(--border-radius);
font-weight: 500;
transition: transform 0.1s, box-shadow 0.2s;
}
button:hover:not(:disabled) {
transform: translateY(-1px);
box-shadow: 0 4px 12px rgba(0,0,0,0.15);
}
button:active:not(:disabled) {
transform: translateY(0);
}
.btn-primary {
background: var(--accent-color);
border-color: var(--accent-color);
color: var(--primary-color);
}
.btn-primary:hover:not(:disabled) {
background: #e88a00;
border-color: #e88a00;
}
.btn-secondary {
background: transparent;
border: 1px solid #ddd;
color: #666;
}
.btn-secondary:hover:not(:disabled) {
background: #f5f5f5;
color: var(--primary-color);
}
.response-card {
display: none;
}
.response-card.visible {
display: block;
}
.response-meta {
display: flex;
gap: 1rem;
margin-bottom: 1rem;
flex-wrap: wrap;
}
.meta-item {
display: flex;
align-items: center;
gap: 0.5rem;
font-size: 0.875rem;
color: #666;
}
.meta-item .label {
font-weight: 500;
}
.meta-item .value {
background: #f0f0f0;
padding: 0.125rem 0.5rem;
border-radius: 4px;
}
.response-content {
background: #f8f9fa;
border-radius: var(--border-radius);
padding: 1rem;
white-space: pre-wrap;
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, sans-serif;
line-height: 1.6;
max-height: 400px;
overflow-y: auto;
}
.response-content.error {
background: #ffebee;
color: var(--error-color);
border: 1px solid #ffcdd2;
}
.loading {
display: none;
align-items: center;
justify-content: center;
gap: 0.75rem;
padding: 2rem;
color: #666;
}
.loading.visible {
display: flex;
}
.spinner {
width: 24px;
height: 24px;
border: 3px solid #e0e0e0;
border-top-color: var(--accent-color);
border-radius: 50%;
animation: spin 0.8s linear infinite;
}
@keyframes spin {
to { transform: rotate(360deg); }
}
.status-bar {
display: flex;
justify-content: space-between;
align-items: center;
padding: 0.75rem 1rem;
background: var(--primary-color);
color: white;
border-radius: var(--border-radius);
font-size: 0.875rem;
margin-bottom: 1.5rem;
}
.status-indicator {
display: flex;
align-items: center;
gap: 0.5rem;
}
.status-dot {
width: 8px;
height: 8px;
border-radius: 50%;
background: #666;
}
.status-dot.healthy {
background: #4caf50;
}
.status-dot.error {
background: var(--error-color);
}
.examples {
margin-top: 1rem;
padding-top: 1rem;
border-top: 1px solid #eee;
}
.examples summary {
cursor: pointer;
color: #666;
font-size: 0.875rem;
}
.examples-list {
display: flex;
flex-wrap: wrap;
gap: 0.5rem;
margin-top: 0.75rem;
}
.example-btn {
padding: 0.375rem 0.75rem;
background: #f0f0f0;
border: none;
border-radius: 999px;
font-size: 0.8rem;
cursor: pointer;
transition: background 0.2s;
}
.example-btn:hover {
background: #e0e0e0;
}
footer {
text-align: center;
color: #999;
font-size: 0.8rem;
margin-top: 2rem;
}
</style>
</head>
<body>
<main class="container">
<header>
<h1>AWS Bedrock Demo <span class="badge">EE 547</span></h1>
<p>Invoke foundation models through a unified API</p>
</header>
<div class="status-bar">
<div class="status-indicator">
<span class="status-dot" id="status-dot"></span>
<span id="status-text">Checking connection...</span>
</div>
<span id="region-display">Region: --</span>
</div>
<div class="card">
<div class="card-header">
<h3>Model Selection</h3>
</div>
<div class="model-select-wrapper">
<div>
<label for="model-select">Choose a model</label>
<select id="model-select">
<option value="">Loading models...</option>
</select>
</div>
<div>
<label for="max-tokens">Max tokens</label>
<select id="max-tokens">
<option value="256">256</option>
<option value="512" selected>512</option>
<option value="1024">1024</option>
<option value="2048">2048</option>
</select>
</div>
</div>
</div>
<div class="card">
<div class="card-header">
<h3>Prompt</h3>
</div>
<textarea
id="prompt-input"
placeholder="Enter your prompt here..."
></textarea>
<details class="examples">
<summary>Example prompts</summary>
<div class="examples-list">
<button class="example-btn" data-prompt="Explain cloud computing in one sentence.">Cloud computing</button>
<button class="example-btn" data-prompt="What are the three main cloud service models (IaaS, PaaS, SaaS)? Briefly describe each.">Service models</button>
<button class="example-btn" data-prompt="Write a Python function that calculates the factorial of a number.">Python code</button>
<button class="example-btn" data-prompt="What is the difference between SQL and NoSQL databases?">SQL vs NoSQL</button>
<button class="example-btn" data-prompt="Explain the CAP theorem in distributed systems.">CAP theorem</button>
</div>
</details>
<div class="button-row">
<button class="btn-secondary" id="clear-btn">Clear</button>
<button class="btn-primary" id="submit-btn">Send to Model</button>
</div>
</div>
<div class="loading" id="loading">
<div class="spinner"></div>
<span>Invoking model...</span>
</div>
<div class="card response-card" id="response-card">
<div class="card-header">
<h3>Response</h3>
</div>
<div class="response-meta" id="response-meta">
<div class="meta-item">
<span class="label">Model:</span>
<span class="value" id="response-model">--</span>
</div>
<div class="meta-item">
<span class="label">Provider:</span>
<span class="value" id="response-provider">--</span>
</div>
</div>
<div class="response-content" id="response-content"></div>
</div>
<footer>
<p>EE 547: Applied and Cloud Computing for Electrical Engineers</p>
</footer>
</main>
<script>
// =============================================================================
// Configuration
// =============================================================================
const API_BASE = ""; // Same origin
// =============================================================================
// DOM Elements
// =============================================================================
const elements = {
modelSelect: document.getElementById("model-select"),
maxTokens: document.getElementById("max-tokens"),
promptInput: document.getElementById("prompt-input"),
submitBtn: document.getElementById("submit-btn"),
clearBtn: document.getElementById("clear-btn"),
loading: document.getElementById("loading"),
responseCard: document.getElementById("response-card"),
responseContent: document.getElementById("response-content"),
responseModel: document.getElementById("response-model"),
responseProvider: document.getElementById("response-provider"),
statusDot: document.getElementById("status-dot"),
statusText: document.getElementById("status-text"),
regionDisplay: document.getElementById("region-display")
};
// =============================================================================
// API Functions
// =============================================================================
async function checkHealth() {
try {
const response = await fetch(`${API_BASE}/api/health`);
const data = await response.json();
elements.statusDot.className = "status-dot healthy";
elements.statusText.textContent = "Connected";
elements.regionDisplay.textContent = `Region: ${data.region}`;
} catch (error) {
elements.statusDot.className = "status-dot error";
elements.statusText.textContent = "Connection error";
elements.regionDisplay.textContent = "Region: --";
}
}
async function loadModels() {
try {
const response = await fetch(`${API_BASE}/api/models`);
const data = await response.json();
elements.modelSelect.innerHTML = data.models.map(model =>
`<option value="${model.id}">${model.name} (${model.provider})</option>`
).join("");
} catch (error) {
elements.modelSelect.innerHTML = '<option value="">Failed to load models</option>';
}
}
async function invokeModel() {
const prompt = elements.promptInput.value.trim();
if (!prompt) {
alert("Please enter a prompt");
return;
}
const model = elements.modelSelect.value;
const maxTokens = parseInt(elements.maxTokens.value);
// Show loading state
elements.submitBtn.disabled = true;
elements.loading.classList.add("visible");
elements.responseCard.classList.remove("visible");
try {
const response = await fetch(`${API_BASE}/api/invoke`, {
method: "POST",
headers: {
"Content-Type": "application/json"
},
body: JSON.stringify({
prompt: prompt,
model: model,
max_tokens: maxTokens
})
});
const data = await response.json();
if (!response.ok) {
throw new Error(data.error || "Failed to invoke model");
}
// Show response
elements.responseContent.textContent = data.response;
elements.responseContent.classList.remove("error");
elements.responseModel.textContent = data.model;
elements.responseProvider.textContent = data.provider;
elements.responseCard.classList.add("visible");
} catch (error) {
elements.responseContent.textContent = `Error: ${error.message}`;
elements.responseContent.classList.add("error");
elements.responseModel.textContent = "--";
elements.responseProvider.textContent = "--";
elements.responseCard.classList.add("visible");
} finally {
elements.submitBtn.disabled = false;
elements.loading.classList.remove("visible");
}
}
function clearPrompt() {
elements.promptInput.value = "";
elements.responseCard.classList.remove("visible");
elements.promptInput.focus();
}
function setExamplePrompt(prompt) {
elements.promptInput.value = prompt;
elements.promptInput.focus();
}
// =============================================================================
// Event Listeners
// =============================================================================
elements.submitBtn.addEventListener("click", invokeModel);
elements.clearBtn.addEventListener("click", clearPrompt);
// Submit on Ctrl+Enter
elements.promptInput.addEventListener("keydown", (e) => {
if (e.key === "Enter" && (e.ctrlKey || e.metaKey)) {
invokeModel();
}
});
// Example prompt buttons
document.querySelectorAll(".example-btn").forEach(btn => {
btn.addEventListener("click", () => {
setExamplePrompt(btn.dataset.prompt);
});
});
// =============================================================================
// Initialization
// =============================================================================
document.addEventListener("DOMContentLoaded", () => {
checkHealth();
loadModels();
});
</script>
</body>
</html>
The frontend uses Pico CSS for styling—a classless CSS framework that provides clean defaults with minimal markup. The JavaScript is vanilla (no frameworks) and handles form submission, loading states, and error display.
Dependencies
Create requirements.txt:
requirements.txt
flask>=3.0
boto3>=1.34
Supporting Files
Create .gitignore:
.gitignore
# Python
__pycache__/
*.py[cod]
*$py.class
.venv/
venv/
env/
# IDE
.vscode/
.idea/
*.swp
*.swo
# Environment
.env
.env.local
# AWS credentials (NEVER commit these)
credentials
*.pem
# OS
.DS_Store
Thumbs.db
# Output
*.log
output.json
Create README.md:
README.md
# AWS Bedrock Demo
Simple Flask application demonstrating AWS Bedrock model invocation.
## Usage
### Local Development
```bash
uv venv
source .venv/bin/activate
uv pip install -r requirements.txt
python app.py
```
Access at `http://localhost:5000`
### Docker
```bash
docker build -t bedrock-demo .
docker run -p 5000:5000 -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY bedrock-demo
```
### EC2 with Instance Role
No credentials needed if instance has appropriate IAM role attached.
```bash
python app.py
```
## Requirements
- Python 3.10+
- AWS credentials with Bedrock access
- Bedrock model access enabled in us-west-2
Run Locally
Create a virtual environment and install dependencies:
uv venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
uv pip install -r requirements.txt
Run the server:
python app.py
Output:
Starting server on http://0.0.0.0:5000
AWS Region: us-west-2
Available models: ['claude-haiku-4-5', 'claude-sonnet-4-5', 'nova-lite']
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:5000
Open http://localhost:5000 in your browser.
Select a model, enter a prompt, and click “Send to Model.” If your local AWS credentials have Bedrock permissions, you’ll see the model’s response.
The application uses boto3’s default credential chain. Locally, this finds credentials from ~/.aws/credentials or environment variables. On EC2 with an instance role, it finds credentials from the metadata service. The code is identical in both cases.
Commit
Add and commit the project:
git add .
git commit -m "Initial commit: Bedrock demo application"Docker Build
Create a Dockerfile:
Dockerfile
FROM python:3.12-slim
WORKDIR /app
# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application
COPY app.py .
COPY static/ static/
# Configuration
ENV PYTHONUNBUFFERED=1
EXPOSE 5000
CMD ["python", "app.py"]
Build the image:
docker build -t bedrock-demo .
Run the container:
docker run -p 5000:5000 \
-e AWS_ACCESS_KEY_ID \
-e AWS_SECRET_ACCESS_KEY \
-e AWS_SESSION_TOKEN \
bedrock-demo
The -e flags pass your local AWS credentials as environment variables. If you’re using a profile, export the credentials first or use a different approach.
Test at http://localhost:5000—it should behave identically to running directly with Python.
Stop the container with Ctrl+C.
Commit the Dockerfile:
git add Dockerfile
git commit -m "Add Dockerfile"Deploy to EC2
With the application working locally, we’ll deploy it to the EC2 instance: copy the files, install dependencies, and run.
Copy Files to EC2
Use scp to copy the project directory to the instance:
scp -i <key-file> -r bedrock-demo ubuntu@<public-ip>:~/
The -r flag copies recursively (the entire directory). The files land in /home/ubuntu/bedrock-demo on the instance.
For larger projects or repeated deployments, rsync is more efficient—it only transfers changed files.
Install Dependencies on EC2
SSH into the instance, navigate to the project, and install dependencies:
cd bedrock-demo
uv venv
source .venv/bin/activate
uv pip install -r requirements.txt
Run the Application
Start the server:
python3 app.py
The server starts on port 5000. But if you try to access it from your browser at http://<public-ip>:5000, it won’t connect.
Open the Security Group
The security group only allows SSH (port 22). We need to add a rule for port 5000.
In the AWS console, navigate to EC2 → Security Groups. Find ee547-demo1-sg and select it.
Inbound rules → Edit inbound rules → Add rule
| Type | Port range | Source | Description |
|---|---|---|---|
| Custom TCP | 5000 | 0.0.0.0/0 | Flask application |
Click Save rules.
Now access http://<public-ip>:5000 in your browser. The application loads, and you can invoke Bedrock models.
The application works because:
- The EC2 instance has the ee547-demo1-role attached
- The role has bedrock:InvokeModel permission
- boto3 automatically finds credentials via the instance metadata service
- No credentials are stored in the code or on the filesystem
Run with Docker
Stop the Python process (Ctrl+C) and run the application in Docker instead:
docker build -t bedrock-demo .
docker run -d -p 5000:5000 bedrock-demo
The -d flag runs the container in the background (detached mode).
Verify it’s running:
docker ps
Access http://<public-ip>:5000 again—same application, now running in a container.
When running in Docker on EC2 with an instance role, the container automatically has access to the instance’s credentials. The metadata service is accessible from within containers by default.
User Data: Automated Setup
The manual setup from earlier—installing Docker, Python, AWS CLI—took several minutes and multiple commands. User data automates this.
User data is a script that EC2 runs when an instance first boots. It runs as root before any user logs in. When the instance becomes available via SSH, the setup is already complete.
Create a New Instance with User Data
We’ll use “Launch more like this” to create a new instance based on our existing one, then add user data.
In the EC2 console, select your ee547-demo1 instance. Click Actions → Images and templates → Launch more like this.
This pre-fills the launch configuration with the same settings: AMI, instance type, key pair, security group. Make these changes:
Name: ee547-demo1-v2
Advanced details → IAM instance profile: Select ee547-demo1-role
Advanced details → User data: Paste the following script:
user-data.sh
#!/bin/bash
# =============================================================================
# EC2 User Data Script
# Bootstraps Ubuntu instance with Docker and Python dependencies
# =============================================================================
# Redirect all output to log file (and console)
exec > >(tee /var/log/user-data.log) 2>&1
set -e # Exit on error
echo "=== User data script started at $(date) ==="
# -----------------------------------------------------------------------------
# Configuration
# -----------------------------------------------------------------------------
APP_PORT=5000
PYTHON_VERSION="3"
# -----------------------------------------------------------------------------
# System Updates
# -----------------------------------------------------------------------------
echo "=== Updating system packages ==="
apt-get update
apt-get upgrade -y
# -----------------------------------------------------------------------------
# Install Docker
# -----------------------------------------------------------------------------
echo "=== Installing Docker ==="
# Install prerequisites
apt-get install -y \
ca-certificates \
curl \
gnupg \
lsb-release
# Add Docker GPG key
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
gpg --dearmor -o /etc/apt/keyrings/docker.gpg
chmod a+r /etc/apt/keyrings/docker.gpg
# Add Docker repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | \
tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
# Add ubuntu user to docker group
usermod -aG docker ubuntu
# Start Docker
systemctl enable docker
systemctl start docker
# -----------------------------------------------------------------------------
# Install Python and uv
# -----------------------------------------------------------------------------
echo "=== Installing Python and uv ==="
apt-get install -y python${PYTHON_VERSION}
# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh
# Make uv available system-wide
cp /root/.local/bin/uv /usr/local/bin/
cp /root/.local/bin/uvx /usr/local/bin/
# -----------------------------------------------------------------------------
# Install AWS CLI v2
# -----------------------------------------------------------------------------
echo "=== Installing AWS CLI v2 ==="
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "/tmp/awscliv2.zip"
apt-get install -y unzip
unzip -q /tmp/awscliv2.zip -d /tmp
/tmp/aws/install
rm -rf /tmp/aws /tmp/awscliv2.zip
# -----------------------------------------------------------------------------
# Setup Complete
# -----------------------------------------------------------------------------
echo "=== User data script completed at $(date) ==="
Click Launch instance.
Monitor Progress
The instance launches and runs the user data script. This takes a few minutes as it installs packages and downloads the AWS CLI.
Connect to the instance and watch the log:
tail -f /var/log/user-data.log
When complete:
=== User data script completed at <timestamp> ===
Verify
docker --version
python3 --version
aws --version
aws sts get-caller-identity
The instance is ready. Docker is installed, credentials are available via the instance role, and no manual setup was required.
Deploy the Application
# From your local machine
scp -i <key-file> -r bedrock-demo ubuntu@<new-public-ip>:~/
# On the instance
cd bedrock-demo
docker build -t bedrock-demo .
docker run -d -p 5000:5000 bedrock-demo
The security group already allows port 5000. Access http://<new-public-ip>:5000.
What Remains Manual
User data handles system setup, but application deployment still requires copying code and building the image. A production pattern would store pre-built images in a container registry (like Amazon ECR) and have the user data script pull and run the image directly.
Custom VPC
So far we’ve used the default VPC. Every AWS account has one per region, pre-configured with public subnets and an internet gateway. It works, but you have limited control over the network architecture.
A custom VPC lets you define:
- The IP address range (CIDR block)
- Subnet layout (public, private, across availability zones)
- Routing (what traffic goes where)
- Network isolation (separate VPCs can’t communicate by default)
Create VPC with the Wizard
Navigate to VPC → Your VPCs → Create VPC.
Select VPC and more — this creates the VPC along with subnets, route tables, and an internet gateway in one step.
Configuration
Name tag auto-generation: Enable, with prefix ee547-demo1
This names all created resources with the prefix: ee547-demo1-vpc, ee547-demo1-subnet-public1-us-west-2a, etc.
IPv4 CIDR block: 10.0.0.0/16
This gives you 65,536 addresses in the 10.0.x.x range. The default VPC uses 172.31.0.0/16—different ranges, no overlap.
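As a quick check of the arithmetic, Python’s standard ipaddress module can count the addresses and show how a /16 divides into subnets:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
print(vpc.num_addresses)  # 65536

# Carving the /16 into /20 subnets, for example, yields 16 of them,
# 4,096 addresses each (AWS reserves 5 addresses per subnet)
subnets = list(vpc.subnets(new_prefix=20))
print(len(subnets), subnets[0])  # 16 10.0.0.0/20
```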
Number of Availability Zones: 2
Number of public subnets: 2
Number of private subnets: 2
NAT gateways: None (saves ~$32/month; we don’t need private subnet outbound access for this demo)
VPC endpoints: None
Click Create VPC.
The wizard creates:
| Resource | Name | Purpose |
|---|---|---|
| VPC | ee547-demo1-vpc | The network container |
| Internet Gateway | ee547-demo1-igw | Connects VPC to internet |
| Public Subnet (AZ a) | ee547-demo1-subnet-public1-us-west-2a | Instances here get public IPs |
| Public Subnet (AZ b) | ee547-demo1-subnet-public2-us-west-2b | Redundancy in second AZ |
| Private Subnet (AZ a) | ee547-demo1-subnet-private1-us-west-2a | No direct internet access |
| Private Subnet (AZ b) | ee547-demo1-subnet-private2-us-west-2b | Redundancy in second AZ |
| Route Tables | ee547-demo1-rtb-public, -private1, -private2 | Routing rules |
Examine the Resources
In VPC → Subnets, you can see the four subnets. Note:
- Public subnets have “Auto-assign public IPv4 address” enabled
- Private subnets do not
- Each subnet is in a specific availability zone
In VPC → Route tables, examine the public route table:
| Destination | Target |
|---|---|
| 10.0.0.0/16 | local |
| 0.0.0.0/0 | igw-xxx |
Traffic within the VPC (10.0.x.x) stays local. Everything else (0.0.0.0/0) routes to the internet gateway.
The private route tables only have the local route—no path to the internet.
Create Security Group
Security groups are VPC-specific. The ee547-demo1-sg we created earlier belongs to the default VPC and can’t be used in the new VPC.
Navigate to EC2 → Security Groups → Create security group.
Security group name: ee547-demo1-sg (same name is fine—they’re in different VPCs)
Description: Security group for ee547 demo
VPC: Select ee547-demo1-vpc
Inbound rules:
| Type | Port | Source | Description |
|---|---|---|---|
| SSH | 22 | 0.0.0.0/0 | SSH access |
| Custom TCP | 5000 | 0.0.0.0/0 | Flask application |
Click Create security group.
Launch Instance in Custom VPC
Navigate to EC2 → Instances → Launch instances.
Name: ee547-demo1-vpc
AMI: Ubuntu Server 24.04 LTS
Instance type: t2.micro
Key pair: ee547-demo1-key (same key works across VPCs)
Network settings → Edit:
- VPC: ee547-demo1-vpc
- Subnet: ee547-demo1-subnet-public1-us-west-2a (either public subnet works)
- Auto-assign public IP: Enable
- Security group: Select existing → ee547-demo1-sg (the one in this VPC)
Advanced details:
- IAM instance profile: ee547-demo1-role
- User data: Paste the same script from earlier
Click Launch instance.
Deploy
Once the instance is running and user data completes:
# Copy application (from local machine)
scp -i <key-file> -r bedrock-demo ubuntu@<vpc-instance-ip>:~/
SSH in and run:
cd bedrock-demo
docker build -t bedrock-demo .
docker run -d -p 5000:5000 bedrock-demo
Access http://<vpc-instance-ip>:5000.
The application runs identically. Same AMI, same user data, same IAM role, same application code. The only difference is the network it’s attached to.
Network Isolation
The instance in the custom VPC and the instance in the default VPC are in separate networks:
- Different IP ranges (10.0.x.x vs 172.31.x.x)
- No direct communication between them (by default)
- Separate security groups
- Could have different routing rules
This isolation is useful for separating environments (dev/staging/prod), applications, or security domains. Resources in one VPC can’t accidentally reach resources in another.
Cleanup
Terminate resources to avoid ongoing charges.
Find Resources by Tag
All resources created in this demo are tagged with ee547-demo1. Use this to find them.
EC2 Instances:
Navigate to EC2 → Instances. In the search bar, enter ee547-demo1. This filters to instances with that name tag.
Select all matching instances → Instance state → Terminate instance.
Terminating an instance:
- Stops the instance (no more compute charges)
- Deletes the root EBS volume (if “Delete on termination” was enabled, which is the default)
- Releases the public IP address
- Does not delete the security group, key pair, or IAM role
Security Groups:
Navigate to EC2 → Security Groups. Search for ee547-demo1-sg.
You’ll have two: one in the default VPC, one in the custom VPC. Select each and Actions → Delete security groups.
Security groups can only be deleted after all instances using them are terminated.
Key Pairs:
Navigate to EC2 → Key Pairs. Find ee547-demo1-key.
Actions → Delete. Also delete the .pem file from your local machine if you no longer need it.
IAM Resources
User:
Navigate to IAM → Users. Find ee547-demo1-user.
Select it → Delete. Confirm by typing the user name.
Deleting the user also deletes its access keys.
Role:
Navigate to IAM → Roles. Find ee547-demo1-role.
Select it → Delete. Confirm by typing the role name.
Policy:
Navigate to IAM → Policies. Find ee547-demo1-policy.
Actions → Delete. Confirm.
VPC Resources
The custom VPC and its associated resources persist until deleted.
Navigate to VPC → Your VPCs. Find ee547-demo1-vpc.
Actions → Delete VPC.
This deletes the VPC and all associated resources: subnets, route tables, internet gateway. It won’t delete resources that still have dependencies (like running instances or security groups with active references).
What Doesn’t Cost Money
Some resources exist but don’t incur charges when idle:
- Security groups — No charge
- Key pairs — No charge
- IAM roles and policies — No charge
- VPCs, subnets, route tables — No charge
- Internet gateways — No charge
These can be left in place if you plan to use them again. The cost comes from:
- Running EC2 instances — Per-second billing
- EBS volumes — Per GB-month (even if not attached)
- Elastic IPs — Small charge when not attached to a running instance
- NAT Gateways — Hourly charge plus data processing (we didn’t create one)
Verify
After cleanup, verify no unexpected resources remain:
# List running instances
aws ec2 describe-instances \
--filters "Name=instance-state-name,Values=running" \
--query 'Reservations[].Instances[].[InstanceId,Tags[?Key==`Name`].Value|[0]]' \
--output table
# List EBS volumes (should only show volumes attached to running instances)
aws ec2 describe-volumes \
--query 'Volumes[].[VolumeId,State,Size]' \
--output table