
Lawyers' use of ChatGPT backfires

Discussion
  • OpenAI’s ChatGPT has impressed users with its incredible ability to write essays, speeches, do homework, crack computer code and pen poems - but it’s also prone to providing information that is incorrect.

    https://news.sky.com/story/lawyers-fined-after-citing-bogus-cases-from-chatgpt-research-12908318

  • You’d think that such a flagrant breach of the law, from the very institution designed to uphold it, would attract a much harsher punishment than a USD 5,000 fine.

    I’d say this is the equivalent of lying in court - perjury, in fact - and you’d think that same institution would impose greater sanctions, such as revoking their licence to practice law, particularly as they stood by the false information they presented.

    A complete breakdown in terms of fact checking, and I’m surprised this case wasn’t thrown out as a result.

    I’m not a lawyer by any stretch of the imagination, but this truly beggars belief.

  • lol just 5,000? But on top of the lying, there’s the stupidity… the incompetence of the law firm…

    maybe you remember, I mentioned these “fake” references on this forum before: when I asked ChatGPT about my own field and asked for some reading suggestions, it gave me completely made-up scientific article titles that looked very legitimate, and I was surprised that I didn’t know these articles… later, it turned out it was just imitating article titles - they sounded legitimate, but the references did not exist.

    if I used the court decision in the “crazycells v. Sudonix” case as an example, I would at least search for the case and read the decision once…

  • @crazycells said in Lawyers' use of ChatGPT backfires:

    lol just 5,000? But on top of the lying, there’s the stupidity… the incompetence of the law firm…

    I know - it beggars belief.

    @crazycells said in Lawyers' use of ChatGPT backfires:

    maybe you remember, I mentioned these “fake” references on this forum before: when I asked ChatGPT about my own field and asked for some reading suggestions, it gave me completely made-up scientific article titles that looked very legitimate, and I was surprised that I didn’t know these articles… later, it turned out it was just imitating article titles - they sounded legitimate, but the references did not exist.

    Yes, I remember this exact discussion.

  • @phenomlab said in Lawyers' use of ChatGPT backfires:

    Yes, I remember this exact discussion.

    So, not following discussions on Sudonix has cost this law firm 5,000 and its credibility…

  • @crazycells said in Lawyers' use of ChatGPT backfires:

    So, not following discussions on Sudonix has cost this law firm 5,000 and its credibility…

    🙂 it certainly seems that way.

  • @phenomlab by the way, I started using Bard as well… it seems it is as useful as ChatGPT, but for different purposes.

    unlike ChatGPT, Bard can give me correct references in my field and even provide direct PubMed links… however, Bard gives very short summaries and focuses on correct, current information… ChatGPT remembers what I say better; Bard does not like long writing, or fantasising the way ChatGPT does 😄

  • @crazycells Thanks. I’m going to be looking at Bard myself. I’ve personally identified some serious flaws with ChatGPT - for example, giving it deliberately vulnerable code and asking it to make changes based on security requirements. It responded “well”, but failed to spot a simple SQL injection vulnerability I’d added to a PHP function - and it also failed to suggest that I change my script to use PDO rather than literal strings (which is what allows the SQL to be injected in the first place).

    I never expected ChatGPT to be “perfect”, but this is a glaring omission in my view as there are thousands of articles on the web concerning remediating this specific issue.

    It just makes me shudder - the thought of people with no fundamental experience of what secure code looks like adopting suggestions from ChatGPT that are inherently insecure and, as a result, easily exploited.
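    The PHP snippet from that test isn’t shown here, but the class of flaw being described can be sketched in a few lines of Python using sqlite3 (the table and function names below are illustrative, not from the original test): building SQL by splicing user input into the query string allows injection, whereas a parameterised query - the rough equivalent of a PDO prepared statement - treats the input purely as data.

    ```python
    import sqlite3

    # Minimal in-memory database for the demonstration.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice')")

    def find_user_vulnerable(name):
        # BAD: user input is spliced straight into the SQL text,
        # so a crafted value can rewrite the query itself.
        query = f"SELECT name FROM users WHERE name = '{name}'"
        return conn.execute(query).fetchall()

    def find_user_safe(name):
        # GOOD: a parameterised query (sqlite3's analogue of a PDO
        # prepared statement) treats the input as data only.
        return conn.execute(
            "SELECT name FROM users WHERE name = ?", (name,)
        ).fetchall()

    # A classic injection payload: makes the WHERE clause always true.
    payload = "' OR '1'='1"
    print(find_user_vulnerable(payload))  # returns every row: [('alice',)]
    print(find_user_safe(payload))        # returns nothing: []
    ```

    The point of the second function is the one the thread makes about PDO: once the query text and the data travel separately, there is no string for the attacker to break out of.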

  • @phenomlab yeah, that’s right. I agree, but rather than code, I’m using ChatGPT to help me with emails, letters, etc. It’s very helpful for drafting a letter; although I don’t directly use anything it gives me, I save a lot of time (and the best part is no mental exhaustion during that initial stage) thanks to it.
    Of course, it might be a little different for you, since you are a native speaker.

  • @crazycells yes, I think it certainly has a place - but to enrich knowledge, rather than simply replace it.

    I remember my exams years ago. You weren’t allowed a calculator or anything like that, and you had to show your workings on a separate piece of paper, for which you were given additional marks.

    These days, schools use iPads and the like, so the art of writing a letter, or of performing mathematical calculations in your head, is disappearing. One of my very first jobs was in a newsagent with a really old till (yes, not a point-of-sale system like you have today) - all this till did was add up the individual figures; it didn’t tell you how much change to give - you had to do that part yourself.

    Sounds simple enough, but with technology doing everything for us these days, our basic skills (think of the “Three R’s”; see the link below) have taken a back seat, and I think that’s made us lazy.

    https://www.merriam-webster.com/dictionary/three%20R's

    Again, my point here is to enrich - not completely replace - the basic skills we learn as we age.