EcoViz+AI

About

As ecological data and the threats facing ecosystems grow in size and complexity, ecologists increasingly turn to artificial intelligence (AI) for data processing, inference, and decision-making. With this acceleration and opportunity, however, come significant scientific, ethical, and practical considerations. We formed this community to discuss the practical barriers to AI implementation in ecological studies, specifically ecologists’ hesitation to use AI due to unclear relevance, opportunity costs, implementation burden, transparency deficits, and incentive shortages. During the workshop, we developed practical recommendations for AI tool development and witnessed firsthand how educational resources, communities of practice, visualization, cyberinfrastructure, and science communication can help overcome these practical barriers.

What is AI, anyway?

The diverse backgrounds, expertise, and perspectives of our group were reflected in our differing definitions of the terms central to this paper, starting with the term AI (Fig 1). In ecology, and in science more broadly, AI, machine learning, and data science are now often used interchangeably. Strict definitions of AI may exclude machine learning and even deep learning, or may be restricted to systems that seemingly replicate human behavior or, minimally, make flexible predictions in light of sensory information. We define AI broadly to include the spectrum of machine learning and deep learning models available to ecologists (Fig 1). As in any subfield, ecologists should use the methods that best align with their scientific question, rather than selecting for novelty or complexity.
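To make this spectrum concrete, the sketch below fits two models to the same toy classification task, such as predicting species presence or absence from environmental covariates: a logistic regression from the interpretable "statistics" end and a random forest from the more flexible machine learning end. It is a minimal illustration assuming scikit-learn; the synthetic data stands in for real survey data, and neither model is endorsed here as the right choice in general.

```python
# Minimal sketch: the same ecological question (e.g., species presence/absence
# from environmental covariates) addressed at two points on the
# statistics-to-machine-learning spectrum. Assumes scikit-learn is installed;
# the dataset is synthetic, not real survey data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for site-level presence/absence observations.
X, y = make_classification(n_samples=300, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable end of the spectrum: coefficients map directly to covariates.
logit = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# More flexible, less interpretable: an ensemble of decision trees.
forest = RandomForestClassifier(random_state=0).fit(X_train, y_train)

print(f"logistic regression accuracy: {logit.score(X_test, y_test):.2f}")
print(f"random forest accuracy:       {forest.score(X_test, y_test):.2f}")
```

The point is not which model scores higher on this toy task, but that both are reasonable tools; the choice should follow from the question, for instance whether covariate effects need to be interpretable.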

Figure 1. A diagram describing our workshop participants’ definitions of AI, showing a spectrum of models from less interpretable and more complex on the left (darker red values) to more interpretable and less complex on the right (lighter blue values). Icons next to a definition represent votes from workshop participants. The spectrum is divided into four categories: General AI (or strong AI); Deep Learning (DL, including Large Language Models [LLM], Generative Adversarial Networks [GAN], Long Short-Term Memory Networks [LSTM], Recurrent Neural Networks [RNN], and Convolutional Neural Networks [CNN]); Machine Learning (encompassing DL models as well as Hidden Markov Models [HMM], Gradient Boosting Machines [GBM], Random Forest [RF], Support Vector Machines [SVM], and Decision Trees [DT]); and Statistics (including, e.g., linear, logistic, and multivariate regressions, Principal Component Analysis [PCA], and Analysis of Variance [ANOVA]).