Digging into digital: efficiencies and ethics

The Environmental Impact Assessment (EIA) Community recently hosted a discussion in collaboration with the Chartered Institute for Archaeologists (CIfA). The event brought together professionals working across EIA specialisms and archaeology to explore the opportunities and challenges that Artificial Intelligence (AI) and digital tools present for the EIA and heritage professions. Discussions explored the environmental and ethical implications of the increasing use of AI and digital tools, how to manage these risks effectively through good practice, and the potential benefits that these tools can bring to the sector. 

This is the second joint event held between the IES and CIfA – the last explored the often under-considered greenhouse gas emissions related to development.

The event featured lightning provocations from the following speakers, which were recorded and can be viewed at the bottom of this page:

  • Claire Marchetti, Carbon and Sustainability Manager at Historic England.
  • Laura O’Connor, Senior Digital Innovation Programme Officer at Historic Environment Scotland and Chair of CIfA’s Information Management Group.
  • Alistair Walker, Technical Director of the EIA team at Lanpro.

The presentations were followed by an interactive discussion, which is summarised below.


What is the potential for AI and digital tools to support knowledge exchange across disciplines?

Technical knowledge and language can be incredibly specific to their field, and fields are often subject to different legislation and guidance. Therefore, even those in adjacent fields may not have a good underpinning knowledge and may not appreciate the nuance of information without specific context and background.

As a result, AI can be helpful in the initial stages of knowledge exchange by supporting development of non-technical summaries for different audiences. It can quickly provide summaries and explanations free from technical jargon, which can be tailored to ‘speak the language’ of the user rather than the original creator. This application is useful if there is limited knowledge of the work being done by another team on a collaborative project.

Data visualisations can also be generated quickly by AI, even when using complex datasets, making data more accessible to multidisciplinary teams. AI can also be a useful tool in drawing connections between fields that may not be easily identifiable without having expertise in each.

How can we ensure people are kept up to date with advances in AI and best practice whilst still maintaining existing skills? Is there a risk that outsourcing tasks to AI will result in skill loss?

It will be necessary to upskill the current workforce in AI use whilst maintaining the skills that currently underpin competence. AI should always be seen as a tool to support professionals, and not as a replacement for competent professionals with the necessary skills to analyse, interpret and act upon data and information – AI use cannot replace professional judgement. A human-centric approach is needed when choosing how and when to use AI, and to provide quality assurance of its outputs. 

Shared learning will be necessary in making the most out of AI and embracing the opportunities it presents. However, it is important that lessons are learned when it is used ineffectively so that boundaries can be established between what is currently a ‘human skill’ versus something that an AI can competently achieve. 

All organisations should develop or adopt ethical frameworks to support AI use. These should set out clear values, principles and aims to ensure ethical and effective use. All professionals should ensure these frameworks are built into workflows. 

It is also important to note that learning around AI is a constant process rather than a one-off, especially given the speed of AI and digital tool development. Professional bodies and workplaces should do more to support professionals to build skills in AI use as part of their CPD.

Human input will be needed for the foreseeable future to check that work completed by AI is correct, so professionals will still need to be equipped with the ‘traditional’ knowledge they currently possess, as well as the capacity to think critically about the inputs being used and outputs generated by AI. 

What are the key considerations when managing the use of AI across multiple models and products?

At the moment this is not straightforward as AI is still a relatively new industry. It can be difficult to identify the right model or how models may interact with one another. It is important to seek advice from AI and IT specialists where needed, and this is something that should be considered at the start of projects rather than when AI is already being used. Asking questions around how different AIs will collaborate is just as important as considering the dynamics between human team members. Additionally, understanding the limitations of AI products and applications is as crucial as understanding their strengths when deciding on appropriate use cases.

It is also essential to understand whether everyone using AI in a specific workstream is equally literate. If not, this is something that should be addressed early to ensure team members are upskilled and that AI is used appropriately and effectively.

What are the risks of attempting work with AI without relevant expertise? Are we ready for it?

Before anything else, it is crucial to identify values and principles around AI use to avoid incidents of AI hindering progress or producing unreliable or incorrect outputs. Organisations should also consider their staff’s capacity, as AI literacy cannot be achieved in a one-off training session and requires both resource and capacity to be embedded in work. This includes ensuring staff know how to build effective prompts and understand the risks and limitations of different AI tools. It is also essential to foster a culture of transparency around AI use, with extra checks and balances to be certain that AI-generated outputs are of a high standard.

Across the sector, practitioners must work together to learn about AI and collectively look to AI professionals when barriers are identified.

Is there reputational risk from embracing AI? 

If organisations are clear how, where and why AI is being used, and are transparent about this, these risks can be mitigated. This relies on careful thought before use, and structured reasoning as to when and why AI is required. Developing policies around AI use is a good way to provide guidance for staff as well as reassurance for external stakeholders. Policies are also an effective way to demonstrate the consideration that has gone into the use of various AI tools.

Being clear about when AI has been used and what it has been used for is essential for maintaining trust with stakeholders and managing reputational risk. Ensuring competent professionals are involved throughout the process and have reviewed any AI-related content is a vital part of this process. 

How do we acknowledge the environmental impacts of AI while still promoting its use?

Ultimately, there is a balance to be found between using AI well and not using it when it is unnecessary: it is a tool for specific tasks, not something to be relied upon at all times. Sharing learning is important when developing use cases so that time and resources are not wasted using AI tools ineffectively.

Further detail is also needed on the emissions and resource use associated with digital and AI tools, and this should be accounted for within sustainability policies and practice. Currently, EIAs do not account for the emissions related to the EIA process itself, but this is an important consideration within a wider project’s environmental and carbon impacts.

Awareness of the high energy and water consumption associated with AI must be embedded in decision making around its use, and it is something that can be linked into organisational environmental strategies and ethical frameworks for use.

What are the knowledge gaps regarding AI use? How can they be filled?

Addressing these questions starts at an organisational level, and an understanding of the existing competencies of staff is essential. It must be clear who within an organisation should be leading on AI use, as well as when and how they will share guidance with other staff. This will support consistency in use and knowledge across an organisation. If organisations are building their own AI models, it is also necessary to be clear on who is responsible for the model and for overseeing the data used and generated. If externally created models are being used, there is still a need for oversight of the data inputs and outputs to ensure the model is effective, and for exploration of privacy and data security.

Not everyone can be an expert in AI, but everyone does need to be clear on how it can and should be used, and on the associated risks of use. Frameworks for use, alongside consistent training and signposting to key knowledge sources, should therefore be made available to all employees.

Alongside structured, consistent approaches to AI use, organisations should seek to provide opportunities for learning through experimentation and creativity. Though automation and efficiency may represent ‘easy wins’ when embedding AI tools, additional applications may only be identified by thinking outside the box and encouraging a working understanding of tools and models.

How can AI and digital support better use of data? What are the barriers?

The construction of digital libraries can be achieved much more quickly with AI, which can also be used to categorise and analyse data. However, siloed and non-standardised data sources can be an issue in building comprehensive datasets. This can be exacerbated through the use of different terminology, both within and across fields. Data cleaning, sourcing and responsible input is essential to feeding models useful and useable data. 

Additionally, data sharing and interoperability need to be normalised across the sector, as there is currently resistance to information and knowledge sharing, driven by commercial interest. The more high-quality data is available to AI models, the more useful the outputs will become for all stakeholders. Centralised and accessible environmental datasets could be transformative for environmental practice and support improved environmental outcomes.


Get involved

Our EIA Community provides a forum for thought-provoking, critical conversations around EIA from a science-based perspective. Those involved in any part of the EIA process are encouraged to join the community, along with those working in planning, engineering, construction, policy and regulation. 

If you’re an IES member but not yet part of the EIA Community, ensure it is selected in the ‘Manage My Communities’ section of your Member Portal. If you’re not yet a member, consider joining us to receive full access to Community outputs.


Further information

Header image credit: © Lightfield Studios | Adobe Stock