
MIT Publishes Draft Guidelines for Responsible Use of Generative AI in Law




As the use of artificial intelligence (AI) becomes increasingly prevalent across industries, the legal profession is no exception. AI's potential to streamline tasks and surface valuable insights makes it a powerful tool for legal professionals. However, integrating AI into the legal field raises important questions about its ethical use, and answering those questions requires robust guidelines. The Massachusetts Institute of Technology's Task Force on Responsible Use of Generative AI for Law is actively addressing this need: it has recently released a public draft outlining initial principles and guidelines to encourage the responsible use of AI within the legal profession. The draft serves as a platform for constructive dialogue among legal professionals globally, fostering a space in which to refine these principles further.



A Call to Legal Professionals


The Task Force was formed in response to a perceived need for principles and guidelines covering factual accuracy, accurate sourcing, valid legal reasoning, alignment with professional ethics, due diligence, and the responsible use of generative AI in law and legal processes.


Recent cases such as Mata v. Avianca, Inc., in which an attorney presented a court with fabricated, AI-generated citations, have underscored the urgency of formulating these guidelines. Mindful of the implications of such incidents, the Task Force emphasises that all content produced by AI, particularly content intended for court submission, should undergo human review and approval for accuracy. As an example, it points to the new rule in the Northern District of Texas, which mandates explicit confirmation of the accuracy of AI-drafted content before court submission. The Task Force acknowledges, however, that these principles are in draft form and do not represent a final, definitive guide. Rather, they are a call to engage the wider legal community in an open conversation to refine and improve them.


Your Voice Matters


The Task Force welcomes feedback from legal practitioners on several key areas, including:

  • Best practices for data governance and information security in AI applications.

  • The completeness and accuracy of the existing duties identified in the draft principles.

  • The implications of these guidelines for legal malpractice insurance.

  • Existing policies, procedures, or guidelines concerning the use of generative AI within your firm or organisation.

  • Different jurisdiction-specific approaches to these issues, particularly outside the United States.

  • Awareness of other groups or task forces focusing on similar issues.

If you would like to provide feedback, you can do so by hitting the button below.





The Draft Principles

The Task Force's draft principles identify seven core duties that apply when utilising AI applications in legal practice:

  1. Duty of Confidentiality

  2. Duty of Fiduciary Care

  3. Duty of Client Notice and Consent

  4. Duty of Competence

  5. Duty of Fiduciary Loyalty

  6. Duty of Regulatory Compliance

  7. Duty of Accountability and Supervision


Duty of Confidentiality


Example Breach: An ethical violation might occur if a lawyer shares a client's confidential information with a service provider whose terms and conditions permit the provider to further use or share that information.

Mitigation Strategy: To uphold this duty, lawyers should avoid sharing confidential information with such providers, or ensure appropriate safeguards are in place, including obtaining the client's consent.
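
As one illustration of what such a safeguard might look like in practice, here is a minimal Python sketch that strips common client-identifying details from text before it leaves the firm's environment. The redaction patterns and the `safe_prompt` step are assumptions made for the example, not part of the Task Force's draft.

```python
import re

# Hypothetical redaction patterns; a real deployment would need far more
# robust, jurisdiction-aware detection of client-identifying information.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+?\d[\d\s().-]{7,}\d)\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched pattern with a labelled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

def safe_prompt(client_text: str) -> str:
    # Redact before anything is sent to a third-party model.
    return redact(client_text)

# The email address and phone number are stripped before submission.
print(safe_prompt("Contact Jane Doe at jane.doe@example.com or 555-123-4567."))
```

A production system would need far more sophisticated detection (names, matter numbers, privileged material), but the fail-safe shape of the sketch, redact first and transmit second, is the point.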


Duty of Fiduciary Care

Example Breach: Failing to fact-check or verify citations in the outputs of generative AI, or using GAI for contract review or completion without regard for defensibility or accuracy.

Mitigation Strategy: Ensure diligence and prudence with respect to facts and law, and maintain existing practices for fact-checking.
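
To make the fact-checking step concrete, the sketch below gates an AI draft on citation verification. The citation pattern is deliberately crude, and `lookup_citation` is a hypothetical stand-in for whatever authoritative source a firm actually uses (a reporter database or an internal library); neither comes from the Task Force's draft.

```python
import re

# Crude pattern for "F. Supp."-style reporter citations; illustration only.
CITATION_RE = re.compile(r"\b\d+\s+[A-Z][\w.]*\s*(?:Supp\.\s*)?\d*d?\s+\d+\b")

def lookup_citation(citation: str) -> bool:
    """Hypothetical stand-in for a query against an authoritative reporter
    database; always returning False here forces human review."""
    return False

def unverified_citations(draft: str) -> list[str]:
    # Every citation the model produced must be confirmed against an
    # authoritative source before the draft is relied upon.
    return [c for c in CITATION_RE.findall(draft) if not lookup_citation(c)]

# A made-up citation used purely to exercise the check.
draft = "See Smith v. Jones, 123 F. Supp. 2d 456 (S.D.N.Y. 1999)."
print(unverified_citations(draft))  # flags the citation for manual checking
```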

Duty of Client Notice and Consent

Example Breach: The client would be surprised by, and object to, the way in which the attorney used GAI in their matter; or the terms of the client agreement require disclosure of this kind and the attorney fails to disclose.

Mitigation Strategy: The terms of the client engagement cover the use of technology and specifically address the responsible use of GAI. If needed, an amended engagement letter is agreed with the client.

Duty of Competence


Example Breach: Using GAI and accepting its outputs as fact without understanding how the technology works or critically reviewing how the outputs are generated.

Mitigation Strategy: Understand generative AI well enough to skilfully integrate it with the other relevant apps and tools in your workflow, including the currently required skill of composing prompts that generate high-quality outputs which augment and improve upon existing human expertise.
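
As a small, hypothetical illustration of deliberate prompt composition, the template below makes jurisdiction, task, and limits explicit, including an instruction not to invent authority. The structure is an assumption about what a high-quality prompt might contain, not a recommendation from the draft principles.

```python
# Illustrative prompt template: being explicit about jurisdiction, task and
# limits tends to produce more reviewable output than an open-ended request.
PROMPT_TEMPLATE = """You are assisting a lawyer qualified in {jurisdiction}.
Task: {task}
Constraints:
- Cite only authorities you can quote verbatim; if unsure, say so.
- Flag any point that requires human legal judgment.
Context:
{context}"""

def build_prompt(jurisdiction: str, task: str, context: str) -> str:
    return PROMPT_TEMPLATE.format(
        jurisdiction=jurisdiction, task=task, context=context
    )

print(build_prompt(
    jurisdiction="England and Wales",
    task="Summarise the limitation issues in the attached letter.",
    context="[redacted client correspondence]",
))
```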


Duty of Fiduciary Loyalty


Example Breach: Accepting at face value GAI output containing recommendations or decisions contrary to the client's best interests, e.g. output that prioritises the interests of a buyer when your client is the seller, or of the employer when your client is the employee.

Mitigation Strategy: Critically review, confirm, or correct the output of generative AI to ensure the content accurately reflects and supports the interests and priorities of the client in the matter at hand, including as part of advocacy for the client.


Duty of Regulatory Compliance


Example Breach: Deploying GAI to employees and agents of your firm who practise in jurisdictions that have, for example, banned that technology.

Mitigation Strategy: Analyse the relevant laws and regulations of each jurisdiction in which GAI is deployed in or by your firm, and ensure compliance with those rules, e.g. by being able to turn off the tool for users in problematic jurisdictions.
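
One possible shape for such a switch-off mechanism is sketched below: an allow-list consulted before the tool is enabled for a user, keyed by the jurisdiction they practise in. The jurisdiction codes and the policy table are invented for illustration; a real table would be maintained by counsel tracking each jurisdiction's rules.

```python
from enum import Enum

class GAIPolicy(Enum):
    ALLOWED = "allowed"
    BLOCKED = "blocked"

# Hypothetical policy table keyed by invented jurisdiction codes.
JURISDICTION_POLICY = {
    "US-TX-ND": GAIPolicy.ALLOWED,  # e.g. permitted subject to accuracy certification
    "EX-BAN": GAIPolicy.BLOCKED,    # placeholder for a jurisdiction banning the tool
}

def gai_enabled(jurisdiction: str) -> bool:
    # Default to blocked: an unknown jurisdiction means no compliance
    # analysis has been done, so the tool stays switched off.
    return JURISDICTION_POLICY.get(jurisdiction, GAIPolicy.BLOCKED) is GAIPolicy.ALLOWED

print(gai_enabled("US-TX-ND"))  # True
print(gai_enabled("EX-BAN"))    # False
print(gai_enabled("UNKNOWN"))   # False: fail closed
```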


Duty of Accountability and Supervision


Example Breach: Using GAI applications without adequate best practices, human oversight, evaluation, and accountability mechanisms in place.

Mitigation Strategy: Any language drafted by GAI is checked for accuracy against authoritative legal sources by an accountable human being before submission to a court. Responsible parties decide which use cases and tasks GAI can and cannot perform, and sign off on its use on a client/matter basis.
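
A minimal sketch of such a sign-off gate, assuming the firm records who reviewed what: an AI-drafted document cannot be marked ready for filing until a named human has reviewed it and verified its sources. All names and fields here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    drafted_by_gai: bool
    reviewed_by: str | None = None   # accountable human reviewer
    sources_verified: bool = False   # checked against authoritative legal sources

def ready_to_file(draft: Draft) -> bool:
    # AI-drafted language may only be submitted to a court once a named
    # human has reviewed it and verified its sources.
    if draft.drafted_by_gai:
        return draft.reviewed_by is not None and draft.sources_verified
    return True

motion = Draft(text="...", drafted_by_gai=True)
assert not ready_to_file(motion)  # blocked: no human sign-off yet
motion.reviewed_by = "A. Partner"
motion.sources_verified = True
assert ready_to_file(motion)      # now cleared for submission
```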


Conclusion


The aim is to adapt these principles to address issues specific to the use of generative AI in legal practice, taking into account existing professional rules and principles of conduct such as the ABA Model Rules of Professional Conduct and the United Nations Basic Principles on the Role of Lawyers. The Task Force will rely on feedback from the legal profession in response to these draft principles; it is only through open dialogue and collaborative effort that the integration of AI into the legal profession can be done ethically, responsibly, and for the betterment of justice.




