I admit to a certain amount of amusement at this software from Cellebrite, new to me, as analyzed and published on the Signal blog:
Cellebrite makes software to automate physically extracting and indexing data from mobile devices. They exist within the grey – where enterprise branding joins together with the larcenous to be called “digital intelligence.” Their customer list has included authoritarian regimes in Belarus, Russia, Venezuela, and China; death squads in Bangladesh; military juntas in Myanmar; and those seeking to abuse and oppress in Turkey, UAE, and elsewhere. A few months ago, they announced that they added Signal support to their software.
Their products have often been linked to the persecution of imprisoned journalists and activists around the world, but less has been written about what their software actually does or how it works.
Wait for it …
Anyone familiar with software security will immediately recognize that the primary task of Cellebrite’s software is to parse “untrusted” data from a wide variety of formats as used by many different apps. That is to say, the data Cellebrite’s software needs to extract and display is ultimately generated and controlled by the apps on the device, not a “trusted” source, so Cellebrite can’t make any assumptions about the “correctness” of the formatted data it is receiving. This is the space in which virtually all security vulnerabilities originate.
Since almost all of Cellebrite’s code exists to parse untrusted input that could be formatted in an unexpected way to exploit memory corruption or other vulnerabilities in the parsing software, one might expect Cellebrite to have been extremely cautious. Looking at both UFED and Physical Analyzer, though, we were surprised to find that very little care seems to have been given to Cellebrite’s own software security. Industry-standard exploit mitigation defenses are missing, and many opportunities for exploitation are present.
As just one example (unrelated to what follows), their software bundles FFmpeg DLLs that were built in 2012 and have not been updated since then. There have been over a hundred security updates in that time, none of which have been applied.
Oh! That’s a burn. It’s beginning to taste like a company that concerns itself with nothing but making money, doesn’t it?
And that’s worth considering. At the highest ethical standard, a company marketing a software package should endeavour to deliver software with no security holes, now shouldn’t it? Oh, sure, given today’s primitive technology, security holes are a given – but ethical marketers and developers should at least make some effort, right?
But, then, ethical developers shouldn’t be developing software for the “we’ll use it to torture our journalists” market either, wouldn’t you say? People signing on for that work … probably are not all that ethical.
So, in the end, it’s not much of a surprise. Low-class software, because doing it properly requires a mindset that would have refused the job in the first place.
Interesting how that all works out.