Behind the scenes - Medicine trailer

In this article, I write about my experiences producing a trailer for the new Medicine Galleries when I worked at the Science Museum. To say that expectations around this £24 million project were high would be a bit of an understatement. How on earth do you convey more than 500 years of medical history and 3,000 objects in a 30-second trailer? You can imagine that with so many stakeholders, it was very important that everything was included. There were conversations about film crews and drones and all sorts. Time to take a step back. Colleagues across the museum had all sorts of content in the pipeline that would cover the different galleries in detail. I needed to stick to my task, which was to let people know that the galleries were open. Trouble is, they weren’t.

As anyone who works in arts marketing knows, the thing you are trying to sell often doesn’t exist yet. The set has not yet been built for that theatre show. The choreographer is still developing the dance piece. The objects have not yet arrived in the museum. In the case of the Medicine Galleries, this was a multi-year construction project, so it’s not done until it’s done. At the time I needed to create the trailer, access to the site was difficult. It was a PPE area back in the day when PPE meant hard hats and steel toecaps. Multiple contractors were doing multiple jobs in different places, all in a perfectly timed dance. The project management going on was phenomenal.

The place was basically a building site, so any wide shots of galleries were out of the question. Physical access to the galleries was limited to certain short timeslots, and what I had access to varied from day to day depending on what works were going on. Lighting was obviously going to be an absolute nightmare, and there wasn’t going to be anywhere to put a camera on a tripod.

I decided that since gallery shots weren’t going to be an option, I’d have to go with an object-based approach. With over 3,000 objects to choose from, I needed one that absolutely screamed MEDICINE! One of my favourite objects in the gallery seemed like the perfect choice - a model hospital! It’s absolutely beautiful, and as you walk alongside it and peer into the windows, it feels like you’re in a Wes Anderson film. Even the colour palette of the object had a kind of Wes Anderson vibe to it. I thought it would be fun to try and steal this idea.

So, I had my idea, but I didn’t have Robert D. Yeoman on hand to help, and nobody was going to set up dolly tracks in the galleries, so I had to improvise. I had managed to organise a half-hour visit during which I’d have to get all the footage I’d need. The object was behind glass with reflections everywhere. Luckily, the object was plugged in so the lighting inside worked, but it was controlled by several buttons, each of which lit just one section of the object for a few seconds.

I knew there was no way I’d get a steady tracking shot with a video camera, so I went armed with my mobile phone and planned to deal with movement ‘in post’. By using a small, light phone camera, I was able to get right up to the glass case in a way that wouldn’t have been possible with a cinematography lens. This solved the problem of reflections but meant I couldn’t get all of the hospital in one shot. Even if I could have, the resolution wouldn’t have been good enough to zoom in as far as I wanted without it becoming blurry. So, I took lots of photos of different parts of the hospital with the idea of stitching them together afterwards. Now, bear in mind, this object is almost four metres long. I took almost 600 photos before my half-hour - and my phone battery - ran out.

Just 20 of the images I had to stitch together.

Stitching the photos together was quite a task! Around 80 photos made it into the final composite. All of them had to be carefully placed using a reference image and colour corrected to counter the changing lighting conditions. I also had to do a bit of warping and stretching to make sure the perspective matched up in each room to give the effect I was after. I started in Photoshop, but once the layers built up it started to struggle, so I switched to Affinity Photo, which performed better. Fortunately, I had my own MacBook, as the ancient Windows 7 museum computer wouldn’t have stood a chance. Eventually, I had a complete hospital.
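
Incidentally, if you wanted a rough automatic first pass rather than the painstaking manual placement I did, something like OpenCV’s Stitcher can get you part of the way. This isn’t how I built the final composite, and the file names and settings below are purely illustrative, but it’s a reasonable starting point before tidying up by hand.

```python
import glob
import cv2

# Illustrative paths only - point this at your own folder of photos.
paths = sorted(glob.glob("hospital_photos/*.jpg"))
images = [cv2.imread(p) for p in paths]

# SCANS mode assumes a mostly flat subject shot square-on (like a model
# behind glass), rather than a rotating panorama.
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
status, composite = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("hospital_rough_stitch.png", composite)
else:
    print(f"Stitching failed with status code {status}")
```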

Completed hospital composite

Now I needed to turn my image into video content and start panning around it. My initial thought was to bring it into a video editor as a still and just pan across the image. Uh, uh, uh… You see, this image was big. Very big. 28,210 x 11,410 pixels big. That’s bigger than IMAX digital projection size - about the equivalent of 70mm IMAX film. I did try importing the image into Resolve (my editor of choice, and there’s a free version available) but it wasn’t having any of it! Video editors just aren’t designed to work with images this size, so I needed a different approach.
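
For a sense of just how big that is, here’s the back-of-the-envelope arithmetic (numbers rounded):

```python
# Rough arithmetic on why the composite chokes a video editor.
width, height = 28_210, 11_410
pixels = width * height      # ~322 million pixels
bytes_rgba = pixels * 4      # held uncompressed as 8-bit RGBA

print(f"{pixels / 1e6:.0f} megapixels")                           # ~322 MP
print(f"~{bytes_rgba / 1024**3:.1f} GiB per uncompressed frame")  # ~1.2 GiB
# A 1080p frame, by comparison, is about 2 megapixels.
```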

Enter Blender. Blender is a free, open-source piece of software that does a lot of things, including video editing, but I didn’t use that functionality here. What I did use were its 3D capabilities. I was able to add my hospital image into a virtual world, add a virtual camera and control the movement of that camera in a way that wouldn’t have been possible in real life, on-gallery. It didn’t matter that the hospital was a flat image and not a 3D model, as it would only ever be seen from one angle: straight on.
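
The gist of the setup can be scripted with Blender’s Python API. The sketch below approximates what I built by hand in the interface rather than being a recipe I actually ran - the file path, render settings and camera distance are all placeholders:

```python
import bpy

IMAGE_PATH = "/path/to/hospital_composite.png"  # placeholder path

scene = bpy.context.scene
scene.render.resolution_x = 1920   # assuming HD delivery for social media
scene.render.resolution_y = 1080
scene.render.fps = 25

# Load the stitched composite and put it on a plane that keeps its aspect ratio.
img = bpy.data.images.load(IMAGE_PATH)
width, height = img.size

bpy.ops.mesh.primitive_plane_add(size=1.0, location=(0.0, 0.0, 0.0))
plane = bpy.context.active_object
plane.scale = (width / height, 1.0, 1.0)   # ~2.5:1 for 28,210 x 11,410

mat = bpy.data.materials.new("HospitalComposite")
mat.use_nodes = True
tex = mat.node_tree.nodes.new("ShaderNodeTexImage")
tex.image = img
mat.node_tree.links.new(tex.outputs["Color"],
                        mat.node_tree.nodes["Principled BSDF"].inputs["Base Color"])
plane.data.materials.append(mat)

# A camera hovering directly above the plane, looking straight down,
# so the flat image is only ever seen head-on.
bpy.ops.object.camera_add(location=(0.0, 0.0, 2.0), rotation=(0.0, 0.0, 0.0))
scene.camera = bpy.context.active_object
```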

Screen grab of a virtual environment with the hospital picture and a camera pointing at it.

The advantage of this method is that the software only has to deal with the part of the image that’s visible to the virtual camera (if you use the right settings). That’s many times less data, and it meant I could plan out my camera movements and play them back in real time. I could even adjust my virtual shutter speed and so on to get a nice motion blur effect as the camera whip-pans across the rooms - nicely disguising the fact that there is no change in perspective.
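
Continuing the same sketch, the camera movement comes down to keyframing the camera’s location, and the blur to the scene’s motion blur settings. The frame numbers and positions below are illustrative, and the motion blur properties shown are the Cycles ones (Eevee keeps equivalent options under scene.eevee):

```python
import bpy

scene = bpy.context.scene
camera = scene.camera  # the camera set up earlier

# Whip-pan from one wing of the hospital to the other over two seconds.
# Positions are illustrative - in practice you'd scrub the viewport to find them.
camera.location = (-1.0, 0.0, 0.5)
camera.keyframe_insert(data_path="location", frame=1)

camera.location = (1.0, 0.0, 0.5)
camera.keyframe_insert(data_path="location", frame=50)

# Motion blur sells the speed of the pan and helps hide the flat perspective.
scene.render.use_motion_blur = True
scene.render.motion_blur_shutter = 0.5  # fraction of a frame the virtual shutter stays open
```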

The next step was to bring the image sequence generated by Blender into Resolve to turn it into an actual video. Then I chose some stock audio (always the hardest part!) and added some text and a title treatment animation using Fusion (which comes as part of Resolve and is also free).

Making the title animation in Fusion

And that’s it! Here’s a quick breakdown of the resources used for the project.

  • Phone camera - I used a Pixel 3 but really whatever phone you have in your pocket would be fine. Nothing fancy.

  • Image editor - I ended up using Affinity Photo, which is £47.99 with no ongoing licence costs. You could equally use GIMP, which is free, open-source software.

  • 3D - Blender, which is free.

  • Editing/compositing/motion graphics - Resolve/Fusion. The free version does everything you need.

  • Music licensing - as much as you want to pay. I think I used Audiojungle for this project, but there are many free and paid options out there.

So basically, the main cost was time. Not just my time, but the time of the colleagues who helped me get access to the object, those who put the object there in the first place, those who listened to my ideas and fed into the creative process, those who scheduled the social media posts with the video content - the list goes on!

The final trailer is below for anyone who hasn’t seen it.

Note

This article is a collection of my own thoughts and experiences and is not affiliated with or endorsed by the Science Museum or any of the software products mentioned. I was employed at the Science Museum at the time of this project and all images/video are © Science Museum Group.