Sunday, 11 December 2016

Facebook employs client-side ranking for improved efficiency


Facebook's core business revolves around the News Feed and the content it delivers to users, so the social giant is making it more efficient by employing client-side ranking.
When it comes to seeing content, you want it fast, and so do your users if they're going to stick around. You may not have control over their internet speeds, but that doesn't prevent you from taking measures to ensure your users have content readily available instead of staring at loading screens.
Facebook has taken several measures recently to speed up the experience for their users such as 'Instant Articles', a publishing format the company says can load up to 10x faster than standard, and the debut of 'Lite' apps which require less data but provide a simpler experience for users in emerging markets.

The most recent trick prioritises content in the News Feed through a technique known as 'client-side ranking', which ensures pre-cached content sits at the top of your feed, ready to go, while subsequent items load in the background.
"We redesigned the architecture of News Feed to allow stories to be re-ranked on the client after being sent from the server. We avoid spinners and grey boxes by 1) requiring stories to have all necessary media available before rendering them in News Feed and 2) being able to optimise the content in News Feed for each session," wrote Facebook in a blog post.

The subsequent content is prioritised based on what you're most likely to find interesting. Facebook says that each time you scroll past a post, it recalculates your News Feed to decide what to load, drawing on both new stories from the server and unseen ones from your cache.
All of the calculations are performed client-side (on the user's device) to reduce how much data is sent to Facebook's servers. Facebook says its philosophy on the matter is that a fast network should be considered an enhancement rather than a requirement.

"This architecture also enables us to surface stories that have been optimised for your connection at the time of your session. For example, slow-loading content gets temporarily down-ranked while it loads because, before we show a story in your News Feed, we check to see whether the media in the story — the image, the video, the link preview, etc. — has been loaded on your device. If it hasn't, we re-rank the stories on the client and prioritise those that have fully-loaded media."
Facebook's latest update is sure to be welcomed by its users, and it could be worth considering whether a similar architecture could benefit your own apps.

 

Qt launches lightweight IoT development framework


The Qt Company, an open source tools provider, has launched a new project that aims to make software development faster and more lightweight.
The new Qt Lite Project aims to offer a wide range of enhancements to developers in order to streamline the creation and delivery of software and devices for all relevant platforms, regardless of device size. The project is built into the company’s existing framework and can help streamline the development of software and devices for many industries, including the automotive, avionics, healthcare, home appliance, and entertainment sectors.

Qt Lite will allow developers to start with a minimal deployable configuration and simply add any additional features they require while developing their project. This will give them complete control, with a continuous understanding of the consequences of their choices, and provide transparency across the development team.

The Internet of Things (IoT) angle is noted by Qt. In the press materials, the company cited research from Gartner and MarketsandMarkets; 6.4 billion connected things are in use in 2016, while the connected device market is expected to grow from $157.05 billion this year to $661.74 billion by 2021, at a CAGR of 33.3%. Writing in a company blog post, Nils Christian Roscher-Nielsen explained: “As the requirements and the world of software development is changing, so does Qt.
“We believe in a future of great software and hardware, developed together, delivered quickly, and that you can have fun in the process. Embedded development should be just as simple as all other software development, and you should immediately see the result of your ideas running on your device.”

For the past 20 years, Qt has been used on a wide range of operating systems, including Linux, Microsoft Windows, and various real-time operating systems.

 

Uber launches API to help improve the experience for drivers


Uber would be nothing without its drivers, at least until it rolls out its driverless capabilities as self-driving vehicle technology improves. To ensure its drivers are supported wherever possible, Uber has launched a new API which enables developers to help improve their experience.
“More than 1.5 million people across the globe drive on the Uber platform,” explained Chris Saad, Head of the Uber Developer Platform. “With demand for flexible, on-demand work on the rise, we also see an extraordinary opportunity for developers. By leveraging driver profile data, trip data, earnings, and more, you can create new apps and services that make driving with Uber more productive and fun.”
To demonstrate how the Driver API can be utilised, Uber enlisted the help of several innovative apps and services, including Jobcase, Sears, Sherpashare, Stride, and Activehours.
Jobcase brings together a community of more than 50 million people pursuing new career opportunities and now allows users to share their Uber experience and rating on their profile using the Driver API. 
Sears is using the Driver API to reward Uber drivers with Shop Your Way loyalty points for their completed trips which can be used for things such as new appliances at Sears, a new outfit at Kmart, or a tune-up at a Sears Auto Center.
Sherpashare wants to improve the driving experience and help maximise earnings through intelligent route recommendations to pick up more people en route.
Stride helps driver-partners reach their income goals by helping with financial management and maximising take-home pay.
Activehours builds on Instant Pay by giving even more drivers immediate access to the money that they’ve earned.
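An app like Stride or Sherpashare might aggregate a driver's take-home figures from the trip records such an API returns. A minimal sketch is below; the field names are invented for illustration, not Uber's documented schema.

```python
# Hypothetical sketch: aggregating a driver's earnings from trip
# records like those a Driver API trips endpoint might return.
# Field names are illustrative assumptions, not Uber's actual schema.

def summarise_earnings(trips):
    """Total fares and distance for a list of completed trips."""
    completed = [t for t in trips if t["status"] == "completed"]
    return {
        "trips": len(completed),
        "total_fare": round(sum(t["fare"] for t in completed), 2),
        "total_miles": round(sum(t["distance"] for t in completed), 1),
    }

trips = [
    {"status": "completed", "fare": 12.50, "distance": 4.2},
    {"status": "completed", "fare": 8.75, "distance": 2.9},
    {"status": "rider_cancelled", "fare": 0.0, "distance": 0.0},
]

print(summarise_earnings(trips))
# {'trips': 2, 'total_fare': 21.25, 'total_miles': 7.1}
```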
"With demand for flexible, on-demand work on the rise, we also see an extraordinary opportunity for developers. By leveraging driver profile data, trip data, earnings, and more, developers can create new apps and services that make driving with Uber more productive and fun," explains Saad.
This latest addition for developers helps Uber in the fight against increasing competition from rivals such as Lyft and ensures the platform retains driver loyalty. You can find out more information about the Driver API here.

 

MasterCard launches blockchain APIs for developers


Financial services giant MasterCard has launched three blockchain APIs on its developer portal in a bid to keep up with Visa.
The concept of blockchain first arrived when bitcoin – the digital currency – started to gain traction and required a public ledger to record all transactions in a secure environment, away from potential tampering. To facilitate this, a blockchain is decentralised and spread across the network, with each block representing a record and containing both a timestamp and a link to the previous block, forming a chain that aids verification.
Over 40 top financial institutions have begun experimenting with distributed ledger technology in a bid to help secure and track ownership of assets in the digital realm, which should help to speed up transactions and lower costs while also reducing fraud concerns.
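The hash-linked structure described above can be sketched in a few lines of Python. This is a toy illustration of the data structure, not MasterCard's implementation:

```python
import hashlib
import json

# Toy sketch of a blockchain: each block carries a timestamp, a
# payload, and the hash of the previous block, so rewriting any past
# record invalidates every block after it.

def block_hash(block):
    payload = {k: block[k] for k in ("timestamp", "data", "prev_hash")}
    return hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()

def make_block(data, prev_hash, timestamp):
    block = {"timestamp": timestamp, "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def chain_is_valid(chain):
    """Every block must point at the hash of the block before it,
    and every stored hash must match a freshly recomputed one."""
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev_hash"] != prev["hash"]:
            return False
    return all(b["hash"] == block_hash(b) for b in chain)

genesis = make_block("genesis", prev_hash="0" * 64, timestamp=0)
block1 = make_block("alice pays bob 5", prev_hash=genesis["hash"], timestamp=1)
chain = [genesis, block1]

print(chain_is_valid(chain))   # True
chain[0]["data"] = "tampered"  # rewriting history...
print(chain_is_valid(chain))   # False: genesis no longer matches its hash
```

A real network adds consensus, signatures, and distribution across many nodes, but the tamper-evidence shown here is the core property the article describes.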
Mastercard, for its part, claims its blockchain "facilitates new commerce opportunities for the digital transfer of value by allowing businesses and financial institutions to transact on a distributed ledger."
Featured in the 'new and experimental' section of its developer site, MasterCard's dedicated blockchain APIs enable developers to get started with the financial giant's foray into the area.
Three APIs are now available:
  • BLOCKCHAIN CORE API – Run your own blockchain nodes, define your own transaction types, and manage your participation in a blockchain network.
  • SMART CONTRACTS API – Write custom scripts using Mastercard’s Smart Contract language for use in your custom blockchain applications.
  • FAST NETWORK API – Helps with understanding instantaneous net position, doing real-time reconciliation, executing settlement, and generating custom reports.
Visa recently announced its partnership with blockchain specialist Chain to develop a near real-time funds transfer system for high-value bank-to-bank and corporate payments.
“Our technology can power multiple use cases and can help take time, cost and risk out of financial flows,” MasterCard wrote on its website.
Blockchain has tried-and-proven use cases in the financial sector, but companies in other industries are looking to harness its benefits as they face new challenges in data management and security. The biggest obstacle to widespread enterprise adoption of blockchain technology is getting the network of participants to agree on a common network protocol and technology stack.

 

Big Viking Games raises $21.75m, continues to bet on HTML5 mobile gaming


Big Viking Games, a Canada-based independent mobile and self-published game studio and pioneer in HTML5 mobile games, has secured a total of $21.75 million (£17.4m) in funding from Royal Bank of Canada (RBC), Export Development Canada (EDC) and BDC Capital.
The gaming house is looking to secure more financing, aiming to raise at least another $60m, in the belief that investing in HTML5 will help it widen the gap between itself and the competition.
Since inception, the company has been highly profitable, with 55% CAGR on revenues and 180% on EBITDA. The five-year-old company has grown organically, without any venture capital or outside equity investment to date.
Albert Lai, co-founder and CEO of Big Viking Games, said: “Starting in 2012, we made the decision to make significant investments in HTML5 instant games because we saw the potential of the technology and how it will define the future of mobile gaming and entertainment. Others have moved away from HTML5 due to the technical investment required, but we believe that open standards and more powerful devices will pave the way for a massive shift on mobile phones and tablets.”
Lai added: “While our focus on HTML5 has paid off with our Triple-A instant games that can be distributed on many powerful mobile platforms, such as messenger applications, we weren’t sure we could find the right investors that understood our vision in the early days. Now that we have multiple million dollar budget titles in the works to distribute on new and upcoming platforms, investors can see how our business strategy is on track to change the future.”
RBC, in partnership with EDC, provided Big Viking with $18m in financing, while BDC contributed $3.75m in an earlier round that has since been repaid with company profits.

 

Incorporating the latest speech tech into your UX


Nobody used keyboards in the sci-fi of our childhoods. Whether it was the control system of starships or the hub of a utopian world, every interaction was based on human speech. Opening the pod doors or jettisoning the trash only required a simple command, and many of those systems replied in kind.
Now we’re closing in on that reality. Siri was the first seismic shift in the field, but companies like Google and Amazon have gone much further. Apple has even announced a massive rollout of voice-activated apps into the App Store. With high rollers like Uber, Runkeeper, and Skype all taking up the mantle of voice recognition, this tech is no longer a niche development — it is swiftly transforming into a necessity for app developers hoping to keep up with the competition.

Alexa Upped Our Speech Technology Game

With Amazon Echo, a device without screens or digital input mechanisms, we need only say, “Alexa,” and our wish is her command.
Alexa started out similarly to Siri. She could accomplish a limited set of small tasks when prompted by a human user. But Amazon opened the platform to developers around the world, and the Echo device’s capabilities grew.
It wasn’t just one company building a product from the ground up; instead, Amazon tapped into the wider technology community, letting developers around the world contribute a growing library of voice commands to the platform.
And it isn’t just Amazon that’s making waves with this technology. Google is helping redefine human voice detection, too.

Google’s Voice Access Breakthrough

By analysing dialects, accents, sentence structure, and vocal inflexion, Google is working toward a more precise understanding of human commands. This research will allow programs to differentiate between when a user is asking a question versus making a statement.
This is a huge step in the right direction. Commercial speech recognition has improved by 30 percent over the past few years, but breaking the accent barrier will unleash a new wave of improvements.
The current incarnation of Google’s Voice Access already gives users the ability to control their phones with words instead of actions. But once Google’s research comes to fruition, the real work of tying it into voice and intent recognition will begin.

How Designers Can Incorporate Speech Technology Into UX

UX designers need to start considering the consequences of these developments. On-screen displays today function side by side with limited voice recognition, but as networking begins to grow and integrate across multiple IoT devices, users will need a simple, speech-based UX.
So how can developers ensure their apps’ UX takes full advantage of this technology’s potential? There are three key ideas to adopt:
1. Consider the total experience. UX today is a primarily visual experience, but with the incorporation of speech technology it will become an aural experience, too. Developers need to adjust their approaches accordingly. They can’t simply focus on laying out links and buttons; they need to think about the entire journey someone takes when interacting with the software.
2. Provide audible cues. There’s nothing more frustrating for a user than confusion, and often voice-based systems leave users dumbfounded over whether their voice commands were recognised in the first place. Don’t fall into this trap. Provide audible cues to users so they know their commands were registered and understood.
3. Provide visual cues. Sometimes users won’t want or understand an audible cue – they might be shouting into their phones in a busy bar, or they might be whispering in a library. When working with visuals as well as audio, visual cues of understanding are very important, especially when there’s a series of questions to be answered. Users need to know that the first entry has been understood and that the system is basing subsequent questions on that first interaction.
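Points 2 and 3 above can be combined in a simple handler that pairs every command, recognised or not, with both an audible and a visual response. The command set and wording here are invented for illustration:

```python
# Illustrative sketch of audible + visual cues: every voice command
# gets both a spoken confirmation and an on-screen indicator, so users
# know whether they were understood in a noisy bar or a silent library.

KNOWN_COMMANDS = {"start run", "pause run", "stop run"}

def handle_command(transcript):
    """Return paired audible and visual feedback for a transcript."""
    command = transcript.strip().lower()
    if command in KNOWN_COMMANDS:
        return {
            "audible": f"OK, {command}.",   # spoken confirmation
            "visual": f"\u2713 {command}",  # on-screen checkmark
            "recognised": True,
        }
    # Unrecognised input: say so, and show the valid options on screen.
    return {
        "audible": "Sorry, I didn't catch that.",
        "visual": "? Try: " + ", ".join(sorted(KNOWN_COMMANDS)),
        "recognised": False,
    }

print(handle_command("Start run")["visual"])        # ✓ start run
print(handle_command("fly me home")["recognised"])  # False
```

In a multi-step dialogue, the visual cue from each step would stay on screen so the user can see that later questions build on an understood first answer.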
The latest breakthroughs in speech technology have the potential to make our sci-fi childhoods a reality. And developers have a big role to play in unleashing our inner geeks. We can make people’s lives easier and their day-to-day tasks faster. Just don’t go build a HAL 9000.

 

Can A.I. write a Hollywood film?


Over recent years, we've seen artificial intelligence systems designed to write software, compose music, paint works of art, and even pen news articles, but the machines have been notably quiet in the medium of fiction storytelling. Designing an A.I. system that can write the screenplay for a movie, or compose a great novel, has posed a big challenge for researchers. So just how close are we to having machines pen our blockbuster films?

Opening credits

In June, a bizarre short film entitled Sunspring premiered. The film starred Thomas Middleditch (Silicon Valley) and chronicled the cryptic love triangle among three people inhabiting a strange futuristic office. Filled with incoherent non-sequiturs and inexplicably surreal tangents, the film could be considered either a compelling dream-like fugue or an amateurish mess.
In actuality, this odd film is something much more interesting. Sunspring is the first completely A.I.-penned short film, developed in a collaboration between filmmaker Oscar Sharp and A.I. researcher Ross Goodwin. Adapting a general-purpose text-generation algorithm, Goodwin fed his system scores of science fiction screenplays from the 1980s and 1990s. Using films like Ghostbusters and Blade Runner and every episode of TV's The X-Files as its inspiration, the machine learned how to communicate in screenplay format and composed Sunspring.
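Goodwin's system was a recurrent neural network, but the underlying idea of learning patterns from a corpus and sampling new text in its style can be illustrated with a far cruder word-level Markov chain. This is a toy sketch, not Goodwin's model:

```python
import random
from collections import defaultdict

# A toy word-level Markov chain: a much cruder cousin of the neural
# network behind Sunspring, but the same basic idea of learning
# word-to-word patterns from a corpus and sampling new text.

def train(corpus):
    """Map each word to the list of words that follow it."""
    model = defaultdict(list)
    words = corpus.split()
    for cur, nxt in zip(words, words[1:]):
        model[cur].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    random.seed(seed)  # fixed seed so the sketch is reproducible
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

# Two of Sunspring's actual refrains stand in for a screenplay corpus.
corpus = (
    "I don't know what you're talking about. "
    "It's not a dream. I don't know what it is."
)
model = train(corpus)
print(generate(model, "I"))
```

Even this toy produces locally plausible but globally aimless text, which is a fair description of Sunspring's dialogue and hints at why long-form narrative structure is so much harder than sentence-level mimicry.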
The resulting film turned out to be stiflingly incoherent, with no clear structure, but it revealed several fascinating recurring cinematic tropes that the A.I. seemingly recognized across numerous screenplays. Characters constantly exclaim confusion, with "I don't know what you're talking about" becoming a chorus-like refrain, while Thomas Middleditch's character shouts "it's not a dream", recalling countless reality-bending sci-fi stories.
The clear takeaway from Sunspring is that we're a long way from making a decent, or even coherent, A.I. penned movie. While A.I. systems have been developed to generate impressive musical pieces or works of visual art, it seems that fiction storytelling is a much more complicated beast. With its delicately complex combination of narrative, character, dialogue and structure, it seems like one of the critical barriers in developing a true form of artificial intelligence. A.I. pioneer Danny Hillis summed it up when he said, "The key thing that will make [artificial intelligence] work and make it acceptable to society is story telling."
A website called CuratedAI recently launched with the intention of acting as a repository for A.I. generated poetry and prose. Site founder Karmel Allison, a San Francisco-based software engineer, created a neural network algorithm designed to compose original machine-written pieces of poetry. Named Deep Gimble I, the algorithm has been loaded with a vocabulary of 190,000 words and its work is currently featured on the website. The poetry and prose the A.I. generates is definitely verbally discordant, but it does compellingly mimic the cadence and rhythm of classic work in its medium.
Earlier in 2016, a novel written almost entirely by an A.I. system passed the first selection round in a Japanese National Literary competition. Titled The Day A Computer Writes A Novel, this meta-tale had its human overseers direct the plot and characters while the A.I. generated the actual sentences. One judge described how the novel's ultimate shortcoming lay with its character descriptions, but the overall result suggested that a degree of human oversight or participation could make this kind of A.I. written fiction actually work.

The full feature

Impossible Things is the first attempt at an A.I.-generated work of feature film storytelling, and it embraces the idea that a human hand working with the machine is necessary. Mathematician Jack Zhang spent five years creating an A.I. that analyzed thousands of horror film plot summaries alongside their corresponding box office results. The idea is that, by crunching a serious amount of data, the system can create a series of plot points that reflect popular audience tastes. Understanding the limitations of the technology, a human writer was recruited to take the A.I.-generated premise and plot and add structure, dialogue, and character.
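Zhang hasn't published his system, but the basic approach of correlating plot elements with box office results can be sketched as a simple average-revenue score per trope. All films, tropes, and figures below are invented for illustration:

```python
from collections import defaultdict

# Invented illustration of data-mined plotting: score each horror
# trope by the average box office (in $m) of films that used it, then
# pitch a plot built from the highest-scoring tropes.

FILMS = [
    {"tropes": {"creepy_kid", "haunted_house"}, "box_office": 90.0},
    {"tropes": {"creepy_kid", "found_footage"}, "box_office": 60.0},
    {"tropes": {"haunted_house", "cursed_object"}, "box_office": 40.0},
]

def score_tropes(films):
    """Average box office of the films each trope appears in."""
    totals, counts = defaultdict(float), defaultdict(int)
    for film in films:
        for trope in film["tropes"]:
            totals[trope] += film["box_office"]
            counts[trope] += 1
    return {t: totals[t] / counts[t] for t in totals}

def pitch(films, beats=2):
    """Pick the top-scoring tropes as the plot beats to build around."""
    scores = score_tropes(films)
    return sorted(scores, key=scores.get, reverse=True)[:beats]

print(pitch(FILMS))  # ['creepy_kid', 'haunted_house']
```

The sketch also makes the article's criticism concrete: a scorer like this can only ever recombine elements that already succeeded, which is exactly why the output reads as a compendium of cliches.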
The subsequent screenplay, Impossible Things, is now waiting on Kickstarter funding to move into production, but Zhang and his team have released a short summary of the A.I. co-written film, and it predictably reads like an epic conglomeration of every horror movie cliche imaginable. The teaser trailer created to support the Kickstarter campaign, which the A.I. also shaped by suggesting key elements that would appeal to the film's ideal audience demographic, gives an appropriate indication of how unoriginal the output of this kind of data-mined A.I. system can be.

Every single idea, shot, and audio trick can be traced back to prior successful films in the genre. Creepy kid singing a nursery rhyme? Check! Sound of a door creaking? Check! Ghost-like figure walking with a bloody knife? Check! It plays close to a parody of horror film tropes and in no way displays the whip-smart reflexivity that filmmakers like Quentin Tarantino deploy when they experiment with similar classic genre tactics.

Original sins

The unsettling idea raised by this form of A.I.-generated filmmaking is that the technology can ultimately only feed back ideas we have enjoyed in the past, rather than creating fresh, novel, and meaningful new juxtapositions. Judging by the content of most modern big-budget Hollywood cinema, this method of generating content by replicating past success is a path many movies already tread – Star Wars: The Force Awakens, I'm looking at you!
Netflix is the power-player with this form of data-driven content generation, and while it doesn't have A.I. systems literally creating its product, it does tailor all of its in-house productions to identified audience habits. Netflix has a treasure trove of data at its fingertips and is able to understand the viewing habits of its audience in ways no media producer ever has before. Netflix not only knows how quickly you binge through a series, but it can track the minute you stop watching an episode, if or when you come back to that show, and how you navigate through content in its library. This data allows it to mould its own productions to their audience's preferences.
Back in 2012, before the debut season of its first original series aired, executives were transparent about how they were using mined data to conceive original content. Steve Swasey, VP of Corporate Communications said to GigaOm at the time, "We can look at consumer data and see what the appeal is for the director, for the stars and for similar dramas".
This strategy has obviously been successful. Regardless of the creative or critical success of Netflix's productions, the target audience demographics of each project have been incredibly clear. Its most recent series, Stranger Things, is a prime example. One can easily see how an algorithm could suggest that a show with those parameters would be successful. Do you like 80s movies and have a nostalgic connection to them? Are you a fan of Winona Ryder or Stephen King novels? The series, while creatively only mildly successful, is a supremely well-executed mash-up of current hipster nostalgic obsessions, from Steven Spielberg's Amblin films (E.T., The Goonies, Close Encounters of the Third Kind) to its synth-heavy score reminiscent of John Carpenter horror films (which are referenced frequently) and its retro typeface credits that could be ripped from the cover of a Stephen King novel.
 
Not in 4,500 years?
 
So where does that leave us in the world of A.I. generated storytelling, particularly in the realm of film and television?
Film and television are still complex creative mediums, with long gestation periods from pre- to post-production and a large assemblage of people involved across the production process. As we saw with the fully computer-generated screenplay of Sunspring, A.I. currently has little to no understanding of the nuances of character development and lacks the ability to build a coherent, meaningful narrative structure. At the other end of the spectrum, with productions such as Impossible Things, we simply seem to get A.I.-assisted, data-mined compendiums of cliches, blindly mashing up ideas that worked in prior financially successful films. Netflix also surfs that line of data-driven content production, and while it has had its own volume of creative hits and misses, we still can't shake the feeling that this mode of production is frustratingly uncreative.
In eccentric filmmaker Werner Herzog's latest documentary Lo and Behold: Reveries of the Connected World, Stanford A.I. researcher Sebastian Thrun mentions to Herzog the inevitability of a machine at some point being able to make a film as good as, if not better than, Herzog. The 73-year-old, self-professed technology luddite bristles at the statement, replying, "Absolutely not!" In interviews Herzog has stoically reiterated that not in 4,500 years could a machine make a film better than he could. The statement is steeped in classic Herzogian arrogance, but it does allude to something fundamentally human in the process of creating meaningful fiction.
A.I. may currently be able to offer reasonably interesting simulacra of original content in other artistic mediums, from music to poetry, but fiction storytelling seems a tougher mountain for machine mimicry to climb. It's one thing to use data mining as a way to generate a fictional narrative, but in longer, more immersive forms of media such as film and television, it becomes painfully apparent when a work is either mindlessly incoherent or discouragingly derivative.
A great film speaks to its audience and offers a perspective on the human condition in ways that are often abstract or allegorical. We can be given a unique insight into the world through someone else's experience and this results in a narrative generating its meaning and impact in ways that often cannot be quantified.
No matter how much data a machine can crunch, will it ever be able to offer us a meaningful and affecting perspective on humanity? Can it generate insights into our experiences in this world that are new or profound, and then communicate those ideas through a fictional narrative?
These are the hurdles A.I. developers currently face, and while it probably won't take 4,500 years to overcome these barriers, they certainly pose some fundamental questions about when and how a machine could develop independent and creative thought. In the meantime, we can just console ourselves with the latest Hollywood reboot, remake or sequel, and realize that machines are already pretty well represented in Hollywood.