Since I long ago stopped keeping up with the news in the computer field, a report in New Scientist concerning the imminent extinction of passwords caught me by surprise. This report from MIT Technology Review is somewhat more detailed:
During his talk at Google I/O, Daniel Kaufman, the head of the company’s ATAP (Advanced Technology and Projects) arm, casually mentioned the rollout of a new way of securing Android apps called Trust API. Rather than using standard passwords, Trust API will use biometrics like facial recognition, your typing pattern, even how you walk to help determine that you are who you say you are.
Each metric will contribute to an overall “trust score” that will let you unlock your apps. The program will run in the background of an Android phone, using the phone’s suite of sensors to continuously monitor the user’s behavior. If the trust score falls below a threshold, a user might be prompted for some form of additional authentication.
My first reaction was some surprise, even repulsion. But let’s break it down.
From the program’s¹ side of things, authentication is always a statistical question: what are the odds that the identification and authorization code presented actually represent the person entering the code? In traditional settings, the information landscape is normally barren, because all the program has available are those two items of information. These can be augmented, of course, with more information, such as biometrics (think Family Guy’s penile scan), but in the end the program’s decision is binary: yes or no.
The description of the implementation of the Trust API is intriguing from the program’s side. Through constant monitoring of the environment, it can theoretically build a profile of the authorized user, then one of the current user, and compare the two; the “trust score” mentioned in the article is essentially the degree of agreement between the two profiles, acting as a proxy for the statistical odds that the user is authorized to use the smartphone. The trust score may not be an ideal approach to authentication, because some programs may wish to give more weight to some biometrics than others, but at least we’re making progress on the authentication front.
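To make the idea concrete, here is a minimal sketch of a weighted trust score, purely my own illustration – the signal names, weights, and threshold are invented, not anything Google has published:

```python
# Hypothetical sketch of a weighted trust score. The signals, weights,
# and threshold are my own inventions, not the actual Trust API.

def trust_score(agreement, weights):
    """Weighted average of per-signal agreement values in [0, 1]."""
    total = sum(weights.values())
    return sum(agreement[s] * weights[s] for s in weights) / total

# Per-signal agreement between the authorized-user profile and the
# current user's observed behavior (1.0 = perfect match).
agreement = {"face": 0.9, "typing": 0.7, "gait": 0.8}

# A program that trusts facial recognition more than gait can say so.
weights = {"face": 3.0, "typing": 1.0, "gait": 1.0}

score = trust_score(agreement, weights)  # (2.7 + 0.7 + 0.8) / 5 = 0.84

# Below some threshold, fall back to additional authentication.
needs_reauth = score < 0.6
```

The weighting is the point: a flat average treats a gait mismatch (sprained ankle, new shoes) the same as a facial mismatch, which is exactly the inflexibility the article’s single “trust score” might suffer from.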
Another facet not mentioned in the article is the possibility for a program to require higher or lower trust scores in order to unlock various features. It’s just a thought, given how variable people can be in their requirements.
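That per-feature idea might look something like this – again a hypothetical sketch with invented feature names and thresholds:

```python
# Hypothetical per-feature trust thresholds; names and values are invented.
THRESHOLDS = {"view_photos": 0.5, "send_email": 0.7, "bank_transfer": 0.9}

def allowed(feature, score):
    """Unlock a feature only if the current trust score clears its bar."""
    return score >= THRESHOLDS[feature]
```

With a middling score, the phone could still show your photos while insisting on stronger proof before moving money.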
As an engineer, I appreciate that the authentication question has been factored out (basically turned into just another resource) and more richly explored by (I assume) a dedicated team. Most engineers will look at a problem and see the core set of questions to solve (which either excites them or fills them with dread) – and then there are the security questions, which are viewed as foreign interlopers, like bees with really big stingers, to be avoided or solved as quickly as possible while regretting the damage they do to the code structure. Whether it’s authentication or licenses, security never really plays nice. This work by Google – if trustable – is one of those things that makes an engineer nod with pleasure.
From the user side, there’s a bit more trepidation, particularly for us older types who still harbor some distrust of technology; for the younger sorts, say under 25, this will be a godsend and they’ll run with it. But we suspicious types have to wonder what’s going to happen the first time the trust score drops precipitously and we lose access. We’re not told the implementation strategy behind this program, but given the popularity of Big Data and Artificial Intelligence, it’s not unreasonable to speculate that algorithms from those two areas are involved, possibly including neural networks. One of the notorious facts about those areas is the inscrutability of the results: the result might be right, but often the scientists and engineers who wrote the algorithm do not understand in detail how it was obtained – or what might be wrong when the result is wrong. If that applies here, how does one get around it?
With a password?
That said, it would be interesting to know how the use of computer systems varies both by type and by the age of the user. Do younger users modify their usage style such that the loss of access to the system is not particularly damaging, while we older types aren’t so smart about it? Or does the loss of access to a personal smartphone spark terror in everyone?
I suspect the answer is already out there, I’m just too lazy to go looking.
¹ I mildly detest the pseudo-word “app”. It falls into the same classification as the phrase “leisure suit,” which I’ve already used once today.