Secondly, if the GPS location you hail your lift from is a bar, you've likely already decided it was unsafe to drive – or you planned to drink and left your car at home.

Finally, a quick test of the accelerometer in your phone is sufficient to gauge a level of sobriety. Swaying, excessive shaking, or even the time between taps provides a wealth of information that a programmer can use to their advantage. That's without even going as far as turning the microphone on to judge the noise level. Privacy concerns aside, a belligerent drunk who could harm a driver can be identified by spikes of shouting in an otherwise quiet environment.
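As a rough sketch of the accelerometer idea: unsteady hands show up as large swings in the acceleration magnitude around gravity. The function name, sample values, and threshold below are all hypothetical, chosen purely to illustrate the heuristic.

```python
import statistics

def looks_unsteady(samples, shake_threshold=1.5):
    """Flag excessive shaking via the standard deviation of the
    acceleration magnitude (m/s^2) around gravity (~9.81).
    The threshold is illustrative, not calibrated."""
    return statistics.stdev(samples) > shake_threshold

steady = [9.78, 9.82, 9.80, 9.79, 9.81, 9.80]  # phone held still
shaky  = [9.8, 12.4, 7.1, 13.0, 6.5, 11.9]     # large, erratic swings

print(looks_unsteady(steady))  # False
print(looks_unsteady(shaky))   # True
```

The same shape of check works for tap timing: record the intervals between touches and compare their spread against a baseline.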

The point, though, is that none of this requires a machine to learn from a dataset and make a judgement call based on previous users, or even on your own interactions with the app. It takes a few lines of code, the appropriate libraries at hand, and a little common sense from the developer.
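Those "few lines of code" really are few. Combining the signals already mentioned – pickup location, time of day, accelerometer readings, tap timing – into a plain rule might look like this sketch, where every category name and threshold is a hypothetical stand-in:

```python
def likely_intoxicated(pickup_category, hour, accel_stdev, tap_interval_ms):
    """Plain-rule heuristic over the signals discussed above.
    All categories and cut-offs are illustrative assumptions."""
    at_bar     = pickup_category == "bar"
    late_night = hour >= 22 or hour <= 4   # roughly 10pm to 4am
    unsteady   = accel_stdev > 1.5         # excessive shaking
    slow_taps  = tap_interval_ms > 800     # sluggish interaction

    return at_bar and late_night and (unsteady or slow_taps)

print(likely_intoxicated("bar", 23, 2.4, 450))    # True
print(likely_intoxicated("office", 9, 0.1, 300))  # False
```

No training, no model – just conditionals a developer could write and tune in an afternoon.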

Sure, the method above would probably only work for a standard case; anyone who deviates from the typical pattern is unlikely to be correctly identified as a potentially dangerous drunkard.

One could argue that the potential for AI here is to train on a dataset to improve the success rate. But at the end of the day, an AI-driven platform is only really going to give a confidence percentage of how well the data fits the scenario, and the programmer is still going to simplify that down to another IF statement (if confidence is greater than 90%, for example).
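To make that concrete: whatever produces the confidence score, the decision the app acts on still collapses to a threshold check. The function and threshold below are hypothetical; the model's score is passed in as a plain number rather than coming from any real library.

```python
CONFIDENCE_THRESHOLD = 0.90  # the "90%" from the argument above

def decide(confidence: float) -> str:
    """Reduce a model's confidence score to the same old IF statement."""
    if confidence > CONFIDENCE_THRESHOLD:
        return "flag ride as high-risk"
    return "proceed as normal"

print(decide(0.93))  # flag ride as high-risk
print(decide(0.42))  # proceed as normal
```

All the learning happens upstream; the behaviour the user sees is still a hand-written rule.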

The effort involved is likely to far outweigh the benefit, and accuracy may only increase by around 5%. Herein lies the problem: marketeers need to make their app sound exciting and relevant to today's flavour of the week, but don't necessarily understand what makes AI intelligent, or can't correctly identify the difference between a software feature and a learned behaviour.