It’s Artificial Intelligence (AI) day! And Tom O’Connor, who I just did the monthly EDRM case law webinar with, provides some real intelligence in his follow up blog post on AI, pointing to his more detailed blog post published on the Digital WarRoom site. And, he gets the last laugh, at least in his choice of graphics!
A few weeks ago, Tom’s post on his Techno Gumbo blog (Is the New NIST Standard for AI Looking at the Wrong End of the Horse?) discussed the trustworthiness of AI. And I got a lot of comments about my choice of graphics to go with that theme, which some assumed (at least playfully) was a commentary on Tom himself. Far from it – just having a little fun with the analogy. 🙂
When it comes to the trustworthiness of AI, Tom nailed it back then when he said, “part of the problem is that people aren’t really sure how these programs work”, which illustrates what a difficult subject it is.
His follow-up post (AI Redux or The Other End of the Horse) references that earlier post and asks whether it was “a little harsh or over the top.” Tom’s response: “Nah. I was being nice.” But he also notes that his “old friend Bill Gallivan, CEO and co-founder with his brother Dan of Digital WarRoom, asked me if [I] could narrow my focus a bit and write something about AI with the thesis that legal AI requires specificity of scope and the scope of current applications is NOT very wide.”
Which Tom did. His Techno Gumbo post includes a link to that post on the DWR site, where he references and discusses two other articles with similar viewpoints. From an AI standpoint, Tom concludes that “AI should be an acronym for attorney intelligence” – “Otherwise (as Kevin Scott of Microsoft pointed out in one of the referenced articles), we’re just left with ‘piles of math.’”
As usual, Tom provides a lot of great information, and he even gets to have the last laugh in terms of the picture he used on his TG blog post (which I have borrowed here). At least it made me laugh out loud! 😀
So, what do you think? How can we get attorneys and other legal professionals to be more intelligent about AI? Please share any comments you might have or if you’d like to know more about a particular topic.
Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.
Hmm… I commented on his original post, but the comment never showed up. So I’ll summarize my point here:
(a) Of course AI as a term is hype. No disagreements.
(b) The question that one should be asking, though, is “but what does it do for me anyway, no matter what it is called?”.
(c) The way to figure out what it does for you is to vet. To run simulations. To test it out not by running a search or two, but by letting it do its entire schtick on the data from a completed matter, thereby counterfactually determining what it would have done, had you used it.
(d) Almost no one in the market does (c). Almost no one is willing to do (c).
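The vetting approach in (c) can be made concrete: run the candidate tool over documents from a completed matter and score its calls against the attorneys’ known final coding. Here is a minimal sketch in Python; the `candidate_tool` function and the sample documents are hypothetical stand-ins for a real system and a real matter’s coded data, and the metrics shown (recall and precision) are just two common ways to summarize the comparison:

```python
# Sketch of counterfactual vetting: run a candidate tool over documents
# from a completed matter and score it against the known attorney coding.
# The "tool" here is a trivial keyword stand-in; a real test would swap in
# the actual system and the actual matter's documents.

def candidate_tool(text: str) -> bool:
    """Hypothetical AI under evaluation: flags a document as responsive."""
    lowered = text.lower()
    return "contract" in lowered or "invoice" in lowered

# Documents from a completed matter, paired with the known outcome
# (True = attorneys coded it responsive). Toy data for illustration only.
completed_matter = [
    ("Please review the attached contract amendment.", True),
    ("Lunch on Friday?", False),
    ("Invoice 4471 is past due.", True),
    ("Forwarding the signed agreement.", True),   # the keyword tool misses this
    ("Company picnic photos", False),
]

def vet(tool, corpus):
    """Compare the tool's calls to the known coding; return (recall, precision)."""
    tp = fp = fn = 0
    for text, responsive in corpus:
        flagged = tool(text)
        if flagged and responsive:
            tp += 1            # correctly flagged
        elif flagged and not responsive:
            fp += 1            # flagged, but attorneys coded it non-responsive
        elif not flagged and responsive:
            fn += 1            # missed a document attorneys coded responsive
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return recall, precision

recall, precision = vet(candidate_tool, completed_matter)
print(f"recall={recall:.2f} precision={precision:.2f}")
```

Because the matter is already complete, no judgment calls are needed to score the tool: every disagreement between the tool and the final coding is visible, which is exactly the counterfactual “what would it have done, had you used it” test.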
It’s one thing to complain about the overhyped nature of a name. And I get it; I’m similarly repulsed by the hype. It’s another thing to actually do something about it, by being willing to subject your own efforts to scrutiny, by comparing (vetting) various technologies on your own data.
Until the latter (vetting) happens, the former (depuffing the hype) has no bite, even if the depuffing is fully justified.
In short, the way to get legal professionals to think more intelligently about AI is to get them to understand correct methods for vetting… and then to get them to do it.