This mail showed up in my inbox this afternoon:
Dear REINER SADDEY,
Your volume experienced a failure due to multiple failures of the underlying hardware components and we were unable to recover it.
Although EBS volumes are designed for reliability, backed by multiple physical drives, we are still exposed to durability risks caused by concurrent hardware failures of multiple components, before our systems are able to restore the redundancy. We publish our durability expectations on the EBS detail page here (http://aws.amazon.com/ebs).
While it might raise your pulse, it didn’t raise mine: it had been just two weeks since I set up an Amazon EC2 instance – for the very first time in my life, and mainly just to wet my feet. 72 instance hours in total.
I thus considered the failure both an important warning and a welcome opportunity to practice restoring an EBS-based EC2 instance.
Googling for this particular task, I soon suspected it would be a c(o)urse of frustration: everything can be found, from next-to-impossible to contradictory statements and stories (you must create an AMI; no, you have to launch a new instance; this only works for Linux, and so on), but no definite, convincing, authoritative recipe for restoring a Windows EC2 instance from a snapshot.
Having made up my mind to abandon the search for outside help, I chose to proceed exactly as I would want a restore to work in the first place. This is what worked for me on a single-volume (boot-only) instance:
- Detached the failing volume from the EC2 instance
- Deleted the failing volume
- Created a new volume (within the same availability zone as my EC2 instance) from my snapshot
- Attached this volume to the EC2 instance using device name /dev/sda1
- Started the instance
- Retrieved my Administrator password (the cryptic auto-generated one – I must have changed it after taking the snapshot)
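For the record, the console steps above can also be sketched in code. The following is a minimal, hypothetical sketch using boto3, the AWS SDK for Python (I did everything through the console; all IDs and parameter names here are placeholders, and the instance is assumed to be stopped before its root volume is swapped):

```python
def restore_from_snapshot(instance_id, failed_volume_id, snapshot_id,
                          availability_zone, device="/dev/sda1"):
    """Replace a failed boot volume with a new one created from a snapshot.

    Sketch only: all IDs are placeholders, and the instance is assumed
    to be stopped before its root volume is swapped.
    """
    import boto3  # deferred import so the sketch reads without boto3 installed

    ec2 = boto3.client("ec2")

    # Detach and delete the failing volume
    ec2.detach_volume(VolumeId=failed_volume_id,
                      InstanceId=instance_id, Force=True)
    ec2.get_waiter("volume_available").wait(VolumeIds=[failed_volume_id])
    ec2.delete_volume(VolumeId=failed_volume_id)

    # Create a replacement volume from the snapshot, in the same
    # availability zone as the instance
    new_volume = ec2.create_volume(SnapshotId=snapshot_id,
                                   AvailabilityZone=availability_zone)
    new_volume_id = new_volume["VolumeId"]
    ec2.get_waiter("volume_available").wait(VolumeIds=[new_volume_id])

    # Attach it as the boot device and start the instance again
    ec2.attach_volume(VolumeId=new_volume_id,
                      InstanceId=instance_id, Device=device)
    ec2.start_instances(InstanceIds=[instance_id])

    # On Windows, the encrypted Administrator password could then be fetched
    # via ec2.get_password_data(InstanceId=instance_id) and decrypted with
    # the instance's key pair.
    return new_volume_id
```

Nothing more to it than that – the whole trick is attaching the new volume under the same device name (/dev/sda1) the boot volume had.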
After logging in via RDP, I found everything exactly as it was at the time the snapshot was taken 🙂