More and more programmes and live televised events are leveraging social media, specifically Twitter, to provide a second screen experience during the broadcast. It’s a powerful way for broadcasters to engage viewers through interaction and to generate a sense of group participation in an event. Unfortunately this experience is lost if the viewer records the programme and watches it later. This post briefly outlines three suggested methods of recreating a form of second screen experience for offline viewing. The last of these requires no changes to viewers’ equipment and only minor changes at the broadcast end; it is developed in the second part of this post.
Methods to preserve a second screen experience
Software in the player can retrieve historic tweets from Twitter on demand when playback is initiated and recreate the stream on the television or second screen device. This could be achieved independently of the broadcaster by TV and STB manufacturers. It’s also possible for a third-party developer to enable this functionality on devices capable of running apps, such as Google TV. Because the tweets remain hosted by Twitter and are decoupled from the broadcaster, this method would also avoid many of the editorial issues associated with user generated content.
The broadcaster can insert the Twitter stream into the MPEG metadata of the broadcast stream that a player can make available on the television or second screen device. This would require work by the broadcaster to make the streams available as well as cooperation with TV and STB manufacturers to decide on a common format.
The broadcaster can insert tweets into a subtitle track in the broadcast stream. This could be achieved using current live subtitling systems at the broadcast end and accessed on existing viewer equipment with no modification.
This third method is developed below to give an indication of the viewer experience.
Tweets embedded in the subtitle stream
Two experiments have been made using this technique: an episode of a soap opera (EastEnders) and a live sports event (the Queen’s Club tennis semi-final). The tweets were collected as each programme was being broadcast using a simple PHP script, then written out to a subtitle file in .srt format using the timestamps provided by the Twitter feed. This file was then used to insert a subtitle track into the original programme MPEG stream. For the purpose of this demonstration this was done offline, but in a production system it could be achieved using the same workflow as live programme subtitling.
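For illustration, each collected tweet becomes one cue in the .srt file, with its display window derived from the tweet’s timestamp. The handles, text and timings below are invented examples, not messages from the experiment:

```
1
00:03:12,000 --> 00:03:17,000
@viewer1: Can’t believe he just said that! #EastEnders

2
00:03:21,000 --> 00:03:26,000
@viewer2: Here we go again #EastEnders
```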
With the soap episode the number of tweets was low throughout the programme, so there was no contention for the limited television screen real estate. However, the delay between events on screen and the tweets relating to them could be over a minute. This was confusing, as the expectation with subtitles is that they are immediate. The effect is exacerbated by the short scene lengths in EastEnders.
Tweets on rapidly changing content tend to be out of context
With the live sports event there was no shortage of material. In fact this was the biggest problem. A television doesn’t have the screen size or resolution to support such a large browsable list of content and still be viewable at a distance, so it has to be reduced to a manageable volume. This could be achieved by tighter message filtering, manual curation or simply dropping excess messages.
Tweets on live sports events are less sensitive to time delays
An example video can be seen below:
There are a number of non-technical issues to consider:
Editorial: Does the broadcaster/programme maker want to associate themselves with the content of the messages? As with all user generated content it can be unpredictable if used raw.
Rights: Who owns the content of the messages? While it might be acceptable to associate a programme with a hashtag and encourage viewers to interact using it, blindly re-streaming it would probably be unacceptable to Twitter.
Legal: This is related to the rights issues – does inclusion of messages in the broadcast stream, by whatever means, constitute publishing and so assume the liability for the content of the messages?
These issues are not much different from those encountered with other user generated content, so in most cases a curated stream would address these points.
NOTE: This project is currently offline due to the impracticality of having wires permanently trailing through the living room.
This is a project that makes your photos look like an old television, but by using a real old television rather than relying on digital filters. Once you upload your picture it’s displayed and photographed on an old television set located somewhere in South West London and then returned to you in all its distorted glory. It’s the nearest thing you can get to handmade in the world of digital delivery!
It’s inspired by the great InstaCRT project, which used a DSLR to take pictures of an old video camera viewfinder to create interesting photo effects. Unfortunately the mobile app which accompanied that project was iPhone-only, so as an Android user I was unable to use it. Then I recently became the lucky owner of a Raspberry Pi and was looking for a little project to evaluate it. I had an old Canon A70 digital camera and a splendid Rigonda VL-100 television set lying around, so I set about creating my own photo to telly converter.
The system can produce monochrome images directly, or colour images by combining three separate photographs of the picture’s red, green and blue components. This video illustrates the process required to produce a colour image. However the app only processes monochrome images, partly because they don’t take as long to prepare, but also because the idea here is to capture the distortion caused by physical effects and to minimise digital processing.
Some image artifacts to look out for
Blooming: Images bow outwards in bright areas. This is caused by the increased beam current causing the tube HT voltage to drop.
Visible retrace lines: The beam fails to shut off completely during the vertical retrace period. This set is particularly prone to this, especially noticeable in dark images.
Poor dynamic range: Highlights are washed out, shadows lack detail.
Smeared vertical edges: Not so sure where this comes from. Could be the video drive transistors turning off too slowly or the effect of my composite video input bodge (see below).
The mobile app uploads the selected image to a directory on an FTP server. A shell script running on the Pi scans this directory every minute and downloads any new images. If it finds any it turns the TV on and waits for it to warm up. Then it displays and photographs each image in turn and uploads them back to the FTP server. It then sends an alert to the mobile app using Google Cloud Messaging. When the mobile receives this message it downloads the processed image from the FTP server, saves it to the device’s gallery and informs the user with a notification.
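The flow above can be sketched as a shell script. Everything here is a rough outline under assumptions: ftp_fetch, ftp_put, display_on_tv, crop_to_screen, notify_gcm, tv_on and tv_off are hypothetical placeholder helpers, and the paths are invented; the actual script isn’t published in this post.

```shell
#!/bin/sh
# Sketch of the Pi-side control loop. The helper commands referenced in
# the function body (ftp_fetch, ftp_put, display_on_tv, crop_to_screen,
# notify_gcm, tv_on, tv_off) are hypothetical placeholders.

INBOX=/home/pi/inbox
OUTBOX=/home/pi/outbox

process_pending() {
    ftp_fetch "$INBOX"                              # pull any new uploads
    ls "$INBOX"/*.jpg >/dev/null 2>&1 || return 1   # nothing to do

    tv_on                                # power the set via the GPIO relay
    sleep 30                             # let it warm up

    for img in "$INBOX"/*.jpg; do
        display_on_tv "$img"             # shown over the composite output
        gphoto2 --capture-image-and-download --filename /tmp/raw.jpg
        crop_to_screen /tmp/raw.jpg "$OUTBOX/$(basename "$img")"
        ftp_put "$OUTBOX/$(basename "$img")"  # return the processed image
        notify_gcm "$(basename "$img")"       # alert the mobile app via GCM
        rm "$img"
    done
    tv_off
}

# A cron job or outer loop would call process_pending once a minute,
# backing off for 15 minutes after 10 idle minutes to spare the tube.
```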
Now let’s look at the system components in more detail.
The Rigonda VL-100 was manufactured in the 70s by Mezon Works in Leningrad and I picked this one up in the early 80s for £5. It has cluttered up my parents’ house for the last 30 years (sorry Mum), so it was good news for everyone when I finally found a project for it.
The TV only has an RF input and the tuner’s frequency control is pretty poor making it difficult to rely on it to stay tuned over a long period of time. So I added a composite video input for the set, capacitively coupling the composite signal into the receiver just after the AM detector diode. This gives a reasonable picture which is unaffected by the frequency drift of the tuner.
The TV is powered by 12v from an old PC power supply and is turned on and off by a relay controlled by one of the GPIO lines of the Pi. This allows the Pi to only turn the TV on when there are pictures to process. If no new pictures are received within 10 minutes the Pi turns the TV off and waits 15 minutes before scanning for more images so as not to stress the TV tube by repeated power cycling. Unfortunately this means that sometimes it might take several minutes to process an image but hopefully this strategy will prolong the tube’s life.
This is a Canon A70 which I rescued from a dustbin at my former employer in about 2005. It gave sterling service for many years, but like so many A70s it suffered from bad connections that made it too unreliable for everyday use. However it does support remote control over USB using gphoto2 allowing the Pi to take and download pictures automatically. Unfortunately the camera can’t engage macro mode under USB control so it’s set up at the minimum distance from the TV that still allows it to stay in focus:
This results in an image of the screen together with the surroundings, which is then cropped by the Pi using Imagemagick to give the final picture:
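The crop itself is a single ImageMagick call. The geometry below is made up for illustration; the real offsets and size depend on where the camera sits relative to the screen:

```shell
# Make a stand-in "camera frame" to work on (a real run would use the
# photograph downloaded from the camera by gphoto2).
convert -size 1024x768 xc:gray raw_capture.jpg

# Cut the screen area out of the frame. 640x480+210+140 is an
# illustrative geometry, not the project's actual values.
convert raw_capture.jpg -crop 640x480+210+140 +repage screen.jpg
```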
The Pi used is a model B. It’s connected to the network via the on board Ethernet connector and to the camera via one of the two USB ports. Being Linux based it’s simple to install any drivers and packages required, making it a very powerful development platform.
The Pi has a composite video output which makes it simple to connect to the TV. All that needs to be done for the display is to set the Pi’s video standard to 50Hz/PAL, which can be done by uncommenting the line:

sdtv_mode=2

in /boot/config.txt and restarting.
GPIO setup and relay control
GPIO port 17 is used to control the TV power relay. The GPIO line is set up in the main control script by creating a file called setup_gpio.sh with the contents:
#!/bin/sh
# Prepare GPIO17 for output. The export step is skipped if the line has
# already been exported, so the script is safe to run more than once.
echo "Setting up GPIO for output..."
[ -d /sys/class/gpio/gpio17 ] || echo "17" > /sys/class/gpio/export
echo "out" > /sys/class/gpio/gpio17/direction
chmod 666 /sys/class/gpio/gpio17/value
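Once the line is exported, switching the relay (and so the TV) is just a write to the value file. A pair of helper functions might look like this; parameterising the sysfs path is my addition for testability, not something from the original script:

```shell
#!/bin/sh
# Value file of the exported GPIO line; can be overridden for testing.
GPIO_VALUE=${GPIO_VALUE:-/sys/class/gpio/gpio17/value}

tv_on()  { echo 1 > "$GPIO_VALUE"; }   # energise the relay, TV on
tv_off() { echo 0 > "$GPIO_VALUE"; }   # release the relay, TV off
```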
Just look at that pincushioning – those shelves are supposed to be parallel!
Once this is working, it’s a simple step to process colour images by taking three separate pictures of the red, green and blue channels and then combining them. First split the original image into its three channel images:
convert img.jpg -channel R -separate r.jpg
convert img.jpg -channel G -separate g.jpg
convert img.jpg -channel B -separate b.jpg
Once they’re photographed and cropped they can be combined to produce a single full colour image. The resulting pictures have an odd quality because unlike pictures of a conventional colour CRT, there’s no shadow mask or discrete phosphor dots. The pictures appear much more photographic.
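The recombination step is the mirror of the split above. Assuming the three cropped photographs are saved as r.jpg, g.jpg and b.jpg (the stand-in images below are generated just so the commands run end to end), ImageMagick can merge them back into a single RGB image:

```shell
# Stand-in channel images (a real run uses the three photographs of the
# red, green and blue renditions shown on the screen).
convert -size 64x48 xc:gray r.jpg
convert -size 64x48 xc:gray g.jpg
convert -size 64x48 xc:gray b.jpg

# Interpret the three greyscale images as the R, G and B channels and
# combine them into one colour image.
convert r.jpg g.jpg b.jpg -combine colour.jpg
```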
This is a simple Android app which allows the user to take a photo using the camera, or choose an image from the gallery. Once chosen the image is cropped to a 4:3 aspect ratio and uploaded to the FTP server. When processing is complete the server alerts the device with a GCM message which causes it to download the processed image from the FTP server. It’s been tried on a few devices but testing has by no means been exhaustive. Your mileage may vary. Known issues:
Sometimes the notification message can take a long time to arrive after it’s sent. If the notification doesn’t arrive to tell you your image is ready then click the ‘Browse your pictures…’ button in the app to see a list of your processed images.
You are identified to the server by a unique ID that’s generated when you first run the app. Uninstalling or clearing the app data will delete this ID and any pictures that haven’t been downloaded will then be inaccessible.
Processed images are saved at the back of the device gallery making them difficult to find.
The app relies on the device having the ability to crop images. If that’s not supported it won’t work. The Motorola Droid appears to crop but doesn’t return the image, instead setting the device wallpaper to the selected image.
There are several parts of the system that could do with improvement:
Polling an FTP server isn’t very efficient. Allowing the server to scp files direct to the Pi would be better.
This setup isn’t very scalable. It takes about a minute to process a monochrome image, longer for colour. Using a video camera instead of a digital camera could conceivably get the throughput to maybe 10fps. You’d need something with more power than a Pi to keep up with that though!
November 16, 2012
by Mark Longstaff-Tyrrell
In the late 70s and early 80s Tony Hart presented a children’s art and design programme on BBC1. The studio set was a sprawling warehouse loft with huge skylights and windows overlooking the city. His enthusiasm and imagination had us all enthralled and ensured that he and the programme achieved cult status. Watching some episodes recently I noticed he also had a kicking soundtrack. Apart from the well documented use of ‘Left Bank Two’ and ‘Cavatina’, such avant-garde tracks as ‘Network 23’ by Tangerine Dream and Brian Bennett’s ‘Alto Glide’ also make an appearance. But there are others that even Shazam cannot recognise. They’re probably library tracks, but they are nonetheless very well curated, ranging from rock and funk to reggae and ambient electronica. Here’s a selection, along with the sounds of art being made in the background. They make for an extremely chilled afternoon’s listening. Definitely worth a 30th anniversary issue on BBC Records and Tapes.