Later this month RetroWDW will release Horizons: Revisited, a restored and enhanced ride-through of the attraction featuring AI-interpolated HD video and an all-new stereo soundtrack (for details, please see our Release Information & Schedule page). Normally behind-the-scenes details are released after the fact, but so much time and effort went into restoring and enhancing this attraction footage that we wanted you to read and see exactly what we did ahead of time. By having this knowledge before the release, we feel you may appreciate the end result just a bit more… So sit back, grab a cup of joe, and learn what was involved in bringing you the cleanest and most incredible Horizons video of all time.
Horizons…where do we begin? Well, for one, I’m not about to spill out some soliloquy on why those who remember “peak EPCOT” place Horizons into the Best Attraction of All Time category. However, I think we can all agree that the love and affection Horizons has received from fans over the years has given the attraction an almost cult-like status. There are Facebook groups, online discussion forums, and many shared photos and videos, all seemingly with no end to the discussion and memories. Here at RetroWDW, we are happy to be part of keeping those memories alive with everything from our photo galleries to our 3-part podcast. There are so many things to discuss and see!
One of our first video projects was to take one of MartinsVids’ Horizons recordings and enhance it. He had shot a very clean PAL (the European video format) video of Horizons. Thanks to PAL’s higher vertical resolution and the exposure, we were able to produce one of the cleanest-looking and most watchable ride-through videos of Horizons. It was released in 2014 under the title Restored Horizons POV Ride Through – Go Sniff an Orange and Watch This. Since its release we’ve always been on the hunt for even more footage of Horizons, even looking for film (the true holy grail)!
Our very own How Bowers visited EPCOT Center in the late 90s. Knowing that the attraction would eventually close, he armed himself with an 8mm video camera, a tripod, a wide-angle lens, microphones, pickup coils, and his infamous binaural sound recording device, dubbed “The Head”. All existing Horizons ride-throughs, up until this point, have been sourced from cameras with built-in general-purpose lenses and microphones. These are great for shooting Little Sally’s birthday party, but when faced with a monster-sized attraction set, they capture only a small fraction of what can be seen. Here is a comparison with Martin’s video on the left and our wide-angle lens footage on the right.
1. Digital Transfer – The first step is transferring the analog videotape into the computer. This process takes as long as the footage runs; that is, if How shot one hour of footage, it takes one hour to transfer it into the system. The resulting file has a screen resolution of 640×480 – just 480 lines, the vertical resolution of an old CRT television set! Today’s TVs and cameras operate at resolutions of 1920×1080 (HD), 3840×2160 (4K) and even higher! How recorded nearly two hours of Horizons footage, from a non-stop wide-angle ride-through to zoomed-in details, screens, audio-animatronics, and more.
2. Deinterlacing, Stabilization and Noise Reduction – Through various filters and programs, we digitally manipulate the footage to deinterlace, stabilize, reduce video noise, and make any luma/chroma adjustments. Deinterlacing is the process of taking footage that was designed for older televisions and making it compatible with today’s devices. The frames in older videos, while having 480 lines of resolution, are actually made up of two fields of 240 lines each, alternating back and forth or “interlaced” together. Your eye sees one frame; however, on the first pass the TV displays all the odd-numbered lines, and on the second pass it displays the even-numbered lines. These two passes make up one frame of video. When you try to display this on today’s devices, you get noticeable artifacts: alternating lines on edges and a degraded viewing experience.
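To make the odd/even-line idea concrete, here is a minimal Python sketch (toy lists standing in for scan lines, not any tool we actually used) of splitting a frame into its two fields and “weaving” them back together:

```python
def split_fields(frame):
    """Split a frame (a list of scan lines) into its two interlaced fields."""
    top = frame[0::2]     # odd-numbered lines in TV terms: the first pass
    bottom = frame[1::2]  # even-numbered lines: the second pass
    return top, bottom

def weave(top, bottom):
    """Interleave the two fields back into one full progressive frame --
    roughly what your eye does when watching a CRT."""
    frame = []
    for t, b in zip(top, bottom):
        frame.append(t)
        frame.append(b)
    return frame

# A toy 4-line frame; each inner list is one scan line of pixels:
frame = [[1, 1], [2, 2], [3, 3], [4, 4]]
top, bottom = split_fields(frame)
assert weave(top, bottom) == frame  # the two passes rebuild the frame
```

The catch is that on a moving camera the two fields were captured a fraction of a second apart, so naively weaving them produces the combing artifacts described above; real deinterlacers blend or interpolate between fields instead.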
You’ll notice in some scenes, where How is walking with the camera, that the frames jump all over the place. This is due to the stabilization; the computer has adjusted the frames to keep more of the content centered on the screen. There is a balance between saving rare footage and making it somewhat watchable, or just throwing it out. We went with the former: we’d rather you see the interior of the queue than lose it entirely. So while it may be a bit jumpy, it is there for historic preservation. The image on the right is after we’ve applied digital noise reduction.
3. Artificial Intelligence Upscaling – When we transferred the footage from the videotape, we only received 480 lines of information, because that was all the camera was capable of recording. Normally, after enhancing the video, one saves it, publishes it to YouTube, and lets users’ phones or TVs (which have 1080 or more lines of resolution) “interpolate”, or “make up”, the difference in information between 480 lines and 1080 lines. The result is a soft, often pixelated and blurred picture that isn’t great, because it doesn’t add detail to the image. It is just filling in the gaps.
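That gap-filling is plain interpolation. A minimal sketch of the linear version, applied to one row of pixel values (a toy illustration, not the algorithm any particular TV uses):

```python
def upscale_line(samples, new_len):
    """Linearly interpolate a row of pixel values to a new length --
    the same kind of gap filling a TV does when stretching 480 lines
    to 1080. New values are blends of old ones, never new detail."""
    if new_len == 1:
        return [samples[0]]
    out = []
    scale = (len(samples) - 1) / (new_len - 1)
    for i in range(new_len):
        pos = i * scale              # where this output pixel falls in the input
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# Two known pixels become five -- the three new ones are just blends:
print(upscale_line([0, 100], 5))  # [0.0, 25.0, 50.0, 75.0, 100.0]
```

Notice that no output value contains information that wasn’t already in the input, which is exactly why this kind of upscale looks soft.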
What if we could interpolate ahead of time so that the devices didn’t have to work as hard? This is where upscaling comes into play. Upscaling pre-computes the information between 480 and 1080 lines, and it is a technique video enthusiasts have used for years to “create” HD content. The problem is that it is not true HD content. There are many videos on YouTube claiming to be “Horizons 1080p HIGH DEFINITION footage”; there is no such thing, and never will be unless someone has film footage. “But wait!” you say, “this is purported to be an HD version of Horizons!” That is correct, and it is HD, but it is interpolated HD. Let us explain.
As we previously noted, phones and TVs can upscale video rather quickly. They create the missing information with very fast, yet minimalistic, functions that can compute the gaps in nanoseconds. 4K-capable devices have to create (read: make up) nearly 240,000,000 additional pixels every second! Now, what if the time was taken to properly upscale the image, and it took minutes per frame rather than a split second? What if one threw in some artificial intelligence, machine learning, and neural networks, and the results could be above and beyond anything seen before?
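You can check the math on that 240-million figure yourself, assuming a 640×480 source played at roughly 30 frames per second (NTSC-ish):

```python
# Pixels a 4K device must invent every second when upscaling 640x480 video
source = 640 * 480            # 307,200 pixels in each source frame
target = 3840 * 2160          # 8,294,400 pixels in each 4K frame
extra_per_second = (target - source) * 30
print(f"{extra_per_second:,}")  # 239,616,000 -- roughly 240 million
```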
So how does this work? Imagine a pair of photos of a building: one photo is a super-sharp modern image, whereas the other is an older low-resolution photo. A neural network analyzes thousands of these known photo pairs to learn how details usually get lost. The machine then learns to “fill in” information in new images based on what it learned from comparing all the input pairs. The more pairs you feed it, the smarter it becomes. Not only can it start to recognize basic shapes, but also buildings, how clouds should look, sharp angles, people’s faces, color gradients, etc.
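In practice, those training pairs are usually manufactured by deliberately degrading sharp images, so the “answer” for each low-resolution input is known. A toy sketch of making one such pair (one row of pixels standing in for a photo):

```python
def downscale_2x(row):
    """Average neighboring pixels to make the low-resolution half of a
    training pair -- deliberately destroying detail that the network
    must learn to restore."""
    return [(row[i] + row[i + 1]) / 2 for i in range(0, len(row) - 1, 2)]

sharp = [10, 20, 80, 90, 40, 40]   # a row from the sharp, modern photo
blurry = downscale_2x(sharp)       # its degraded twin: [15.0, 85.0, 40.0]
# (blurry, sharp) is one training pair; feed the network thousands of
# these and it learns how detail is typically lost -- and how to invert it.
```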
It took over ten hours for the machine-learning software to upscale our Horizons footage to 1080 lines of resolution. The process created 322 gigabytes of data spanning over 40,000 individual still images; these were then stitched back together to remake the video. The original video was 640×480. We now have a final video that is 1440×1080, and it has its details intact! So how do we know it succeeded? We’ll let the comparison photos speak for themselves; while not perfect, it is a huge improvement:
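A quick sanity check on those numbers (the per-frame size is only a rough average; 1440×1080 is exactly 2.25× the source in each dimension, preserving the original 4:3 shape):

```python
frames = 40000
total_gb = 322
per_frame_mb = total_gb * 1024 / frames   # rough average per exported still
print(round(per_frame_mb, 1))             # ~8.2 MB per frame

# the upscale keeps the original 4:3 aspect ratio
assert 640 / 480 == 1440 / 1080
assert 1440 / 640 == 1080 / 480 == 2.25
```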
Comparison Photos (Left Original, Right Enhanced)
4. Fish-Eye Removal – One downside to shooting footage with a wide-angle lens is the “fish-eye” effect. That is, objects on the edges of the screen look curved, as if you’re viewing the scene through a very large marble or the eye of a fish. We can use software to “un-wrap” the footage around that marble and lay it flat again. In the image to the right, note the poles holding up the canopy and the door opening. In the original frame, they curve outward. After removing the fish-eye distortion, the curvature is greatly reduced. Also, notice that the proportions of the audio-animatronic figure have been restored. There is a balance between removing the curvature of the footage and keeping things in-frame and visible. Apply too much distortion removal and you lose the benefits of the wide-angle lens!
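The “un-wrapping” is typically done with a radial model: each point is moved along the line from the image center, by an amount that grows with its distance from the center. A toy one-parameter sketch (the coefficient k is made up for illustration, not a real lens calibration):

```python
def undistort_point(x, y, k):
    """Correct barrel ('fish-eye') distortion for a single point using a
    toy one-parameter radial model. Coordinates are measured from the
    image center; k is a hypothetical tuning coefficient."""
    r2 = x * x + y * y       # squared distance from the image center
    scale = 1 + k * r2       # correction grows toward the edges
    return x * scale, y * scale

# The center is untouched, while a corner point is pushed outward
# to straighten the bowed lines:
print(undistort_point(0.0, 0.0, 0.25))  # (0.0, 0.0)
print(undistort_point(1.0, 1.0, 0.25))  # (1.5, 1.5)
```

Pushing corner points outward is exactly why over-correcting crops away the edges of the frame, the trade-off mentioned above.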
5. Blurring and Sharpening – Believe it or not, the process of using AI to upscale can actually leave some areas with inconsistent levels of sharpness. While a frame may look good on its own, when you stitch all the frames together and play them as a movie, there is an odd sharp/sparkle effect. To minimize this, we blur the details a bit and then re-sharpen the blurred image to get some of the detail back. It sounds counterintuitive, but it works.
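This blur-then-resharpen pass is essentially an unsharp mask applied after a light blur. A toy one-dimensional sketch (a single row of pixel values; the 3-tap blur and the amount are illustrative, not the filters we actually used):

```python
def blur3(row):
    """3-tap box blur over a row of pixel values (edges clamped)."""
    out = []
    for i in range(len(row)):
        lo = max(i - 1, 0)
        hi = min(i + 1, len(row) - 1)
        out.append((row[lo] + row[i] + row[hi]) / 3)
    return out

def smooth_then_sharpen(row, amount=1.5):
    """Blur first to tame single-pixel 'sparkle', then add back a scaled
    copy of what a second blur would remove (an unsharp mask) to
    recover edge contrast."""
    smoothed = blur3(row)
    resharp = blur3(smoothed)
    return [s + amount * (s - r) for s, r in zip(smoothed, resharp)]

# a lone 'sparkle' pixel of value 9 gets spread out and tamed:
print(blur3([0, 0, 9, 0, 0]))  # [0.0, 3.0, 3.0, 3.0, 0.0]
```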
6. Editing and Color Correction – Now that the enhanced video has been created, it is time to adjust the color. We broke the attraction up into individual scenes and then adjusted each for saturation, vibrancy, contrast, and a host of other settings. The transitions from scene to scene needed cross-fades added so as not to create any glaringly obvious, abrupt color changes.
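A cross-fade is just a per-pixel linear blend from one graded scene into the next. A minimal sketch (flat lists of pixel values standing in for frames):

```python
def crossfade(frame_a, frame_b, t):
    """Blend the outgoing scene's frame into the incoming scene's frame;
    t runs from 0.0 (all scene A) to 1.0 (all scene B) over the fade."""
    return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

# midway through the fade, each pixel is an even mix of both color grades
print(crossfade([100, 0, 50], [0, 100, 50], 0.5))  # [50.0, 50.0, 50.0]
```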
7. Audio Track & Images – On top of the video, How visited the attraction numerous times to record the in-ride audio track. On three different occasions, he rode with two microphones: one facing the scenes, while the other captured the narration. On another ride-through, How used a telephone pickup coil on the speakers to obtain a decently clean copy of just the narrators, complete with the hiss and pops of the original playback devices. This audio was sent off to a professional for cleaning and restoration; we just didn’t have the expertise or tools in-house to give it the care it needed. The result was nothing short of incredible!
We also have the musical soundtrack from the original recording tapes. It has always been on our to-do list to remix high-quality audio with the video to create the most realistic in-ride experience. Now we have all the ingredients!
To accomplish the remix, we watched numerous Horizons videos to understand the timing and pacing of the music and narration. Scene by scene, we added in the narration, the background music, and the appropriate ambient sound effects; including audio-animatronics. Care was taken to fade out each track to match with the next; avoiding any abrupt cuts or changes. Despite all the sources, certain scenes didn’t have viable audio. In these cases, care was taken to edit together the required audio from a selection of tracks and blend as seamlessly as possible.
8. Deflicker Movie Scenes – You have probably noticed that when a movie screen is recorded by a video camera, the image flickers. Most commonly, the film is being projected at 24 frames per second while the video camera is recording at 29.97 frames per second. This means that the video camera will pick up all 24 projected frames, but the additional 5.97 frames will be recorded when there is no image on the screen, resulting in dark frames. This phenomenon makes the recorded video flicker and can be distracting to viewers. For this project, we utilized software that analyzed the flicker and repaired the darkened frames. While not perfect, it improves both the image quality and the viewing experience.
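One common deflicker approach (we can’t say exactly what the software we used does internally) is to flag frames that are much darker than their neighbors and repair them from those neighbors. A toy sketch operating on one mean-brightness value per frame, with a made-up threshold:

```python
def deflicker(brightness, threshold=0.6):
    """Repair frames far darker than their neighbors (the frames the
    camera caught while the projector's shutter was between film frames)
    by replacing them with the neighbor average. `brightness` holds one
    mean-luma value per frame; `threshold` is a hypothetical tuning knob."""
    out = list(brightness)
    for i in range(1, len(brightness) - 1):
        neighbors = (brightness[i - 1] + brightness[i + 1]) / 2
        if brightness[i] < threshold * neighbors:
            out[i] = neighbors
    return out

# the frames at index 2 and 5 came out dark; both get repaired
print(deflicker([100, 98, 40, 99, 101, 35, 100]))
# -> [100, 98, 98.5, 99, 101, 100.5, 100]
```

A real deflicker tool works on whole images and often rescales the dark frame’s levels rather than discarding it, but the detect-and-repair idea is the same.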
9. Addition of In-Ride Images – We added attraction photos from How Bowers’ personal collection to round out the video and help tell the story of the queue. These images only appear during the queue portion of the video and at the end of the ride, as we wanted the ride-through itself to be completely uninterrupted.
10. Review and Edit – After each major edit, the entire video was exported and shared with volunteer editors. Their job was to meticulously watch the footage, annotating blips, mis-edits, poor transitions, color issues, and sound problems: you name it, they were told to find it. For nearly four weeks we went over countless errors and recommendations. From there we made the fixes and modifications; some involved completely re-restoring and enhancing certain scenes, which meant going all the way back to the original source footage and starting from scratch.
11. Bringing it all Together – After hours of editing and restoration, the video below gives you a glimpse of what went into this project. FYI, it is best watched on a computer, television, or tablet. If you are using your phone, please watch in full-screen mode to see all the details.
A big thank you to How Bowers for supplying the source video and audio, Paul Durso for his audio restoration, Jim Leemhuis as our QA Engineer and our reviewers: Matt Fusfield, George Lenahan, Gary Sullivan, & Martin Smith!