The tool turns your site into plain static HTML files and also grabs the images and any other assets it needs:
Syntax to retrieve a complete website:
wget -m http://site.com
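For reference, `-m` is just shorthand for a bundle of options, and for a copy that actually browses well offline you usually want two more flags on top of it. A minimal sketch (site.com is a placeholder):

```shell
# -m / --mirror expands to: -r -N -l inf --no-remove-listing
# For a browsable local copy you usually also want:
#   -k / --convert-links    rewrite links so they work locally
#   -p / --page-requisites  also fetch the CSS, images, and scripts each page needs
wget -m -k -p http://site.com
```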
wget --no-parent http://web.archive.org/web/20120626012526/http://www.site.com/
Btw, the Wayback Machine command only gets the index.html file and whatever else sits in that root.
I haven't yet figured out how to download a complete site from the Wayback Machine; none of the guides out there work for me. They end up trying to download the whole web archive, even after you tell wget to ignore everything above the root.
You could try one of these, or a variation of them:
wget -m --execute robots=off --no-parent http://web.archive.org/web/20120626012526/http://www.goreeinstitute.org/
wget -e robots=off --mirror --domains=staticweb.archive.org,web.archive.org http://web.archive.org/web/19970708161549/http://www.slackworks.com/
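If those still either stop at the root or wander off into the rest of the archive, one variation worth trying is to drop --no-parent and whitelist the site's snapshot URLs instead, since archived pages link to copies under different timestamps. This is an untested sketch: --accept-regex needs wget 1.14 or newer, and the regex reflects my assumption about how snapshot URLs are laid out.

```shell
# Follow links across snapshot timestamps, but only for this one site.
# The [0-9]+ part matches any snapshot timestamp in the archived URLs.
wget -r -e robots=off \
     --accept-regex 'web\.archive\.org/web/[0-9]+/http://www\.goreeinstitute\.org/.*' \
     http://web.archive.org/web/20120626012526/http://www.goreeinstitute.org/
```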
Anyway, the first command (the one with -m) works when you want to clone your own website.