
Automated Migration of Amazon EBS Volumes from gp2 to gp3 Using Python and Boto3

4 min read · Aug 19, 2023

As a DevOps engineer working extensively with EC2 volumes that demand high throughput, you’re likely familiar with the cost and performance trade-offs that come with them. If you’re looking for ways to optimize your resources, this article is tailor-made for you. In the next few minutes, we’ll walk through an automated approach to migrating Amazon Elastic Block Store (EBS) volumes from gp2 to gp3 using Python and the boto3 library.

Prerequisites:

Before diving into the solution, ensure you have the following prerequisites in place:

EC2 Full Access Role: You need an IAM role or user with EC2 full access (at minimum, the ec2:DescribeVolumes and ec2:ModifyVolume actions) to list and modify EBS volumes.

AWS CLI with Default Profile: Set up the AWS Command Line Interface (CLI) with a default profile so that boto3 can pick up your credentials and region automatically.
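A quick way to confirm the default profile is picked up is a one-line STS identity check. This is just a sanity check, assuming credentials and a region are already configured via `aws configure`; it is not part of the migration script:

```
import boto3

# Prints the AWS account ID the default profile resolves to.
# Assumes credentials/region were configured via `aws configure`.
print(boto3.client("sts").get_caller_identity()["Account"])
```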

Advantages of Migrating to gp3

Reduced Costs: The most compelling reason to migrate is the immediate reduction in your monthly bill: gp3 storage is priced up to 20% lower per GB than gp2.
Longer-Term Support: AWS’s support for gp3 is expected to outlast gp2’s, making the switch a strategic move for the future. As AWS pushes toward wider gp3 adoption, transitioning becomes increasingly important.
Uncompromised Performance: gp3 provides a baseline of 3,000 IOPS and 125 MiB/s regardless of volume size, which matches or exceeds the baseline of most gp2 volumes, so your applications continue to perform optimally. Only gp2 volumes larger than 1 TiB need provisioned IOPS on gp3 to keep parity (see the sketch below).
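
To put the performance point in numbers: gp2’s baseline is 3 IOPS per GiB (minimum 100, capped at 16,000), while gp3 starts at 3,000 IOPS regardless of size. The helper below is a minimal sketch of how you could estimate the IOPS to request when modifying a large volume; the function name is mine, not part of the script that follows:

```
def matching_gp3_iops(size_gib):
    # gp2 baseline: 3 IOPS per GiB, floored at 100, capped at 16,000.
    gp2_baseline = min(max(3 * size_gib, 100), 16000)
    # gp3 includes 3,000 IOPS by default; only volumes whose gp2
    # baseline exceeded that need extra provisioned IOPS.
    return max(gp2_baseline, 3000)

print(matching_gp3_iops(500))   # 3000 -> default gp3 IOPS is enough
print(matching_gp3_iops(2000))  # 6000 -> pass Iops=6000 to modify_volume
```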

The Automated Migration Process:

The automated migration process outlined here leverages Python and the boto3 library, a powerful toolkit for interacting with AWS services programmatically. This approach significantly streamlines the migration and minimizes manual intervention.

To execute the migration, follow these steps:

Install Boto3: If you haven’t already, install boto3 using pip:

```
pip install boto3
```

Implement the Script: Write a Python script that uses boto3 to automate the migration. The script below identifies gp2 volumes, saves their details to a CSV file, and then modifies each one to gp3, logging successes and failures along the way.

```
import csv
import boto3


def get_ec2_volumes(file_name, filters, aws_region):
    ec2 = boto3.client('ec2', region_name=aws_region)
    paginator = ec2.get_paginator('describe_volumes')
    # PageSize caps volumes per API call; no MaxItems, so every
    # matching volume is returned across pages.
    response_iterator = paginator.paginate(
        Filters=filters, PaginationConfig={'PageSize': 500})
    volumes = []
    for page in response_iterator:
        for volume in page['Volumes']:
            # gp2 volumes do not report Throughput, so default to None.
            throughput = volume.get('Throughput')
            volumes.append([volume['VolumeId'],
                            volume['VolumeType'],
                            volume['CreateTime'],
                            volume['Iops'],
                            volume['State'],
                            volume['Size'],
                            throughput])

    with open(file_name, "w", newline="") as file:
        writer = csv.writer(file)
        # Write header
        writer.writerow(
            [
                "VolumeId",
                "VolumeType",
                "AWS-CreateTime",
                "Iops",
                "State",
                "Size",
                "Throughput",
            ]
        )
        # Write volumes list
        writer.writerows(volumes)
    print("EC2 gp2 resources size: " + str(len(volumes)))
    return volumes


def modify_volume_gp3(volume_ids, aws_region):
    ec2 = boto3.client('ec2', region_name=aws_region)
    modified = []
    error_log = []
    for volume_id in volume_ids:
        try:
            response = ec2.modify_volume(VolumeId=volume_id,
                                         VolumeType='gp3')
            state = response['VolumeModification']['ModificationState']
            modified.append([volume_id, state])
        except Exception as e:
            print("exception while modifying volume: " + volume_id)
            print('Exception is here: ' + str(e))
            error_log.append([volume_id, str(e)])
    # Modified volumes are tracked in the file below.
    with open("AWS_Scripts/modified_volumes.csv", "w", newline="") as mod_file:
        writer = csv.writer(mod_file)
        writer.writerow(["VolumeId", "ModificationState"])
        writer.writerows(modified)
    # Volumes that could not be modified are written to the file below.
    with open("AWS_Scripts/ec2_modify_error_log.csv", "w", newline="") as err_file:
        writer = csv.writer(err_file)
        writer.writerow(["VolumeId", "Error"])
        writer.writerows(error_log)


if __name__ == "__main__":
    try:
        aws_region = "us-west-2"
        filters = [{'Name': 'volume-type', 'Values': ['gp2']}]
        # File to save the list of gp2 volumes
        file_name = "AWS_Scripts/gp2_volumes_list.csv"
        # Get the gp2 volumes list
        volumes = get_ec2_volumes(file_name, filters, aws_region)
        gp2_volume_ids = [volume[0] for volume in volumes]
        # Modify gp2 volumes to gp3
        modify_volume_gp3(gp2_volume_ids, aws_region)
    except Exception as e:
        print(e)
        raise
```

Execute the Script: Run the Python script to initiate the migration. As written, the script targets a single region with the default profile; to make it scalable across multiple regions and accounts, wrap the two functions in a loop over regions (and profiles), as sketched below.
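A minimal sketch of that loop follows. It assumes the two functions above are in scope and uses the default profile; covering multiple accounts would additionally mean constructing a boto3 Session with each profile name and passing it into those functions:

```
import boto3

filters = [{'Name': 'volume-type', 'Values': ['gp2']}]
ec2 = boto3.client('ec2', region_name='us-east-1')

# Iterate over every region enabled for this account.
for r in ec2.describe_regions()['Regions']:
    region = r['RegionName']
    file_name = "AWS_Scripts/gp2_volumes_list_" + region + ".csv"
    volumes = get_ec2_volumes(file_name, filters, region)
    modify_volume_gp3([v[0] for v in volumes], region)
```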

Important Considerations:

  1. Approval and Change Management: Before running this migration in a production environment, seek approval from the relevant stakeholders. Even though the migration is low-risk and downtime-free, adhering to change management processes is crucial.
  2. Patience with Volume Changes: Once you modify a volume’s type, you must wait at least six hours before reverting the change or making further modifications to the same volume. You can track each modification’s progress programmatically, as sketched below.
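
Rather than guessing when a modification has finished, you can poll its state with the EC2 DescribeVolumesModifications API. A minimal sketch, with a placeholder volume ID:

```
import boto3

ec2 = boto3.client('ec2', region_name='us-west-2')
# Placeholder volume ID; pass the IDs you just modified.
response = ec2.describe_volumes_modifications(
    VolumeIds=['vol-0123456789abcdef0'])
for mod in response['VolumesModifications']:
    # State progresses from 'modifying' to 'optimizing' to 'completed'.
    print(mod['VolumeId'], mod['ModificationState'])
```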

Conclusion:

Migrating your Amazon EBS volumes from gp2 to gp3 is a strategic move that offers cost savings, long-term support benefits, and unaltered performance. By automating this migration using Python and boto3, you can ensure efficiency and scalability across regions and accounts. Remember to follow approval processes and maintain patience during the transition.

To embark on this journey of optimization and future-proofing, refer to the comprehensive guide provided by AWS in their blog post: [Migrate Your Amazon EBS Volumes from gp2 to gp3](https://aws.amazon.com/blogs/storage/migrate-your-amazon-ebs-volumes-from-gp2-to-gp3-and-save-up-to-20-on-costs/). Happy migrating!

Written by Sai Teja Makani

Senior Manager, DevOps. Blockchain enthusiast, Data Engineer and Google Ads API specialist.