
Start-up: Pixsellar, UK, 2017

For Dr Sepi Chakaveh, a data science lecturer at the University of Oxford, Pixsellar is the culmination of all her web streaming and AI research, compressed into a handful of technologies.

These technologies – which range from adtech to emotional recognition engines – promise to be as cutting edge as her career to date.

Chakaveh spent a decade working as an astronomer before switching fields to IT, where she worked for German research organisation Fraunhofer, developing part of the internet for Eastern Europe in the early nineties. Later, she joined a research group building edutainment applications and virtual learning environments, exploring ideas about how to use VR and AI.

She recalls that, during this time, she created the first online museum for the Beethoven-Haus, “which remains pretty much as I created it”. In 2004, she developed a 3D avatar mapped onto the bone structure of Marilyn Monroe that read news from the financial markets, responding to the TV remote control in real time. “That was pretty revolutionary for even 15 years ago – and her accent changed depending on which territory she was reporting on,” she reflects.

After a move to the UK, she taught biometrics and data science at the University of Hertfordshire and then at the University of Southampton, where she founded the university’s Data Science Academy, first initiated by Tim Berners-Lee, a resident professor at Southampton.

Now based at the University of Oxford in a part-time position, Chakaveh is the sole founder and operator of Pixsellar – and she has three technologies she’s hoping to find partners for.

The first is iPixsellar – a live, AI-powered subtitle-generation tool, the free version of which launched in February. A demo on the company’s website shows how it allows users to go anywhere online – from a real-time Skype conference call to a live Sky News feed to a foreign-language film – and watch it in any language.

Right now, the transcript is generated in about 20 seconds and covers any one of 100 languages. According to Chakaveh, there’s an audio version in the pipeline, “so if you’re having a teleconference, you’ll just be able to hear what’s being said in your own language”.
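To give a rough sense of how this kind of pipeline can be built (an illustrative sketch only, not Pixsellar’s actual stack), the snippet below transcribes a short audio chunk with the open-source Whisper model and machine-translates the result with a MarianMT checkpoint; the chunk file name and the English-to-German language pair are assumptions.

```python
# Hedged sketch of a transcribe-then-translate subtitle step.
# Whisper and MarianMT stand in for whatever engines Pixsellar actually uses;
# "chunk.wav" is a hypothetical audio chunk captured from the live stream.
import whisper
from transformers import MarianMTModel, MarianTokenizer

# 1. Speech-to-text on the audio chunk.
stt_model = whisper.load_model("base")
transcript = stt_model.transcribe("chunk.wav")["text"]

# 2. Machine translation of the transcript (other language pairs would use
#    other Helsinki-NLP/opus-mt-* checkpoints).
mt_name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = MarianTokenizer.from_pretrained(mt_name)
mt_model = MarianMTModel.from_pretrained(mt_name)

batch = tokenizer([transcript], return_tensors="pt", padding=True)
subtitle = tokenizer.batch_decode(mt_model.generate(**batch),
                                  skip_special_tokens=True)[0]

print(subtitle)  # rendered as a subtitle over the live feed
```

Repeating these two steps over successive short chunks of audio is broadly consistent with the roughly 20-second delay described above.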

Chakaveh is also working on a professional version of the product for content makers – producers of e-learning content who want to extend their reach – and for teleconferencing companies.

Another product generating excitement is her emotional recognition engine, Falcon. There’s a demo on the company’s site that works best in the Mozilla Firefox web browser. The product uses machine learning algorithms to categorise facial features and infer emotions: computer vision places 61 points on the human face and compares these against data sets that reside in the cloud.
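For a general sense of how landmark-based emotion inference works (a sketch under assumptions, not Falcon’s implementation), the snippet below uses dlib’s stock 68-point facial landmark predictor – rather than Falcon’s 61 points – and turns the landmarks into simple geometric features for a pre-trained classifier; the classifier file name is hypothetical.

```python
# Illustrative landmark-based emotion inference; not Falcon's pipeline.
# dlib's stock predictor gives 68 points (Falcon reportedly uses 61), and the
# emotion_classifier.joblib file below is a hypothetical pre-trained model.
import cv2
import dlib
import joblib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
clf = joblib.load("emotion_classifier.joblib")  # trained offline on labelled faces

frame = cv2.imread("face.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for face in detector(gray):
    shape = predictor(gray, face)
    pts = np.array([[p.x, p.y] for p in shape.parts()], dtype=float)  # 68 x 2

    # Normalise by inter-ocular distance so features are scale-invariant,
    # then use distances from the nose tip as a simple feature vector.
    eye_dist = np.linalg.norm(pts[36] - pts[45])
    features = (np.linalg.norm(pts - pts[30], axis=1) / eye_dist).reshape(1, -1)

    print(clf.predict(features)[0])  # e.g. "happy", "neutral", "surprised"
```

In a cloud-backed setup like the one Chakaveh describes, the feature vectors would be compared against data sets held server-side rather than classified locally.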

Speaking on potential use cases, Chakaveh explains: “This could bring virtual assistants like Siri into a whole new dimension, because it can detect emotions, lending itself to applications ranging from mental health to care homes. Someone even suggested a use case for pets. Why not use the app to be able to read the emotions of your cat or dog better? We only need to train our engines.”


To serve advertisers and brands, there’s also Pixsellar Player, a live product placement and virtual advertising app capable of inserting personalised ad banners and live personal information into a multicast stream.

“We initially worked on product replacement for mobile streams, but it is able to work on broadcast streams. It basically allows you to insert a unicast stream into multicast content and instantly personalises the material via AI tech,” Chakaveh explains.
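At the frame level, that personalisation step amounts to compositing a per-viewer overlay onto the shared video. The sketch below is only an illustration of the idea – the multicast/unicast plumbing and the AI-driven banner selection are omitted, and the file names and the choose_banner_for() helper are hypothetical.

```python
# Illustrative frame-level banner insertion with OpenCV; not Pixsellar Player's
# code. The banner is assumed to be smaller than the video frame.
import cv2

def choose_banner_for(viewer_id: str) -> str:
    # Stand-in for the AI personalisation step: pick a creative per viewer.
    return f"banners/{viewer_id}.png"

cap = cv2.VideoCapture("shared_stream.mp4")      # the shared (multicast) content
banner = cv2.imread(choose_banner_for("viewer42"))
bh, bw = banner.shape[:2]

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Composite the personalised banner into a fixed slot (bottom-left here).
    frame[-bh:, :bw] = banner
    cv2.imshow("personalised stream", frame)     # would be re-encoded per viewer
    if cv2.waitKey(1) == 27:                     # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```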

Although all of the Pixsellar technologies work independently, they are designed to interface with external systems – and Chakaveh is currently in talks with several partners about this. 

This article is from the April 2019 issue of FEED magazine.