Eamon Downey

Insights from OSU Analytics Day



The annual OSU Analytics Day has always been a beacon of knowledge for professionals in data science and AI. This year, our very own Data Analyst, Evan Jordan, attended the summit, bringing back valuable insights from some of the leading minds in the industry. Below, we explore key takeaways from sessions spanning practical applications to ethical considerations in AI deployment and usage.

 

Model Deployment Mastery: Expert Insights from Brian Griffin of Paycom

Brian Griffin from Paycom detailed the meticulous process that follows the training of a model during his session at OSU Analytics Day. His discussion revolved around ensuring that a model remains efficient and effective after its initial development, focusing on several critical areas:

 

  • Data Source and Delivery: Brian emphasized the importance of determining the origin of new data—whether it will arrive as a continuous stream or in batches—and how the model outputs will be delivered to users, such as via dashboards or web applications.

  • Preparation for Deployment: He outlined essential preparation steps, including:

    • CI/CD Pipelines: Establishing continuous integration and continuous deployment pipelines to enable automated, reliable code changes.

    • Version Control: Implementing robust version control to manage changes and maintain stability across the model's lifecycle.

    • Regular Health Checks: Performing systematic checks to ensure the model's continued performance and reliability.

  • Critical Documentation: Brian stressed the need for thorough documentation, which is crucial for:

    • Understanding Model Assumptions: Documenting the assumptions made during development helps stakeholders understand the model's functionality and limitations.

    • Comprehensive Process Overview: Recording every detail, from data cleaning and preparation through model selection to known limitations and assumptions, ensures transparency and replicability.

  • Monitoring and Retraining: He also touched on the dynamics of model drift:

    • Types of Drift: Identifying changes in the distribution of key variables, or shifts in the relationships among features, which may be instigated by changes in business operations.

    • Retraining Protocols: Setting up retraining schedules, or triggers based on significant shifts in accuracy metrics, to maintain model accuracy and relevance over time. For these tasks, Paycom uses tools like Evidently AI to monitor and manage model performance.
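To make the drift-monitoring idea concrete: Paycom uses Evidently AI for this in practice, but the core check can be sketched by hand. The snippet below is our own simplified illustration using the Population Stability Index (PSI), a common drift metric; the data, bin count, and thresholds are illustrative assumptions, not Paycom's configuration.

```python
import math
import random

def psi(reference, current, bins=10):
    """Population Stability Index between two samples of one feature.

    Bin edges come from the reference distribution's quantiles; the
    per-bin proportions of the two samples are then compared.
    """
    ref_sorted = sorted(reference)
    # Quantile-based bin edges taken from the reference data.
    edges = [ref_sorted[int(len(ref_sorted) * i / bins)] for i in range(1, bins)]

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            idx = sum(x > e for e in edges)  # which bin x falls into
            counts[idx] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-4) for c in counts]

    p, q = proportions(reference), proportions(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# Common rule of thumb (an assumption, not a universal standard):
# PSI < 0.1 -> stable; 0.1-0.25 -> moderate drift; > 0.25 -> retrain.
random.seed(0)
reference = [random.gauss(0.0, 1.0) for _ in range(5000)]
stable    = [random.gauss(0.0, 1.0) for _ in range(5000)]
drifted   = [random.gauss(0.8, 1.0) for _ in range(5000)]

print(f"stable PSI:  {psi(reference, stable):.3f}")
print(f"drifted PSI: {psi(reference, drifted):.3f}")
```

In a deployed pipeline, a check like this would run on each batch of incoming data, with a PSI above the chosen threshold triggering an alert or a retraining job, which is the kind of automation Evidently AI provides out of the box.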

 

"Brian Griffin's session on model deployment offered a deep dive into the lifecycle of a model post-training. His emphasis on rigorous preparation and continuous evaluation illustrates the care and detail needed to sustain model efficacy in real-world applications."

 — Evan Jordan

 

Harnessing AI Responsibly: Beau Rollins on LLMs at Devon Energy

During OSU Analytics Day, Beau Rollins of Devon Energy offered an in-depth look at how Devon is incorporating Large Language Models (LLMs) into its data science workflows. The session covered the nuances of AI implementation with an emphasis on responsible usage:

 

  • Analogy of Tools and Craftsmen: Beau drew a compelling analogy, comparing LLMs in writing, coding, and text analysis to calculators in mathematics. While LLMs are powerful tools, they should not be mistaken for the craftsmen; the real value comes from how the tools are wielded by their prompters.

  • Responsibility of Prompters: Users must remember that responsibility for the outputs lies with them, not with the LLM. Beau emphasized that the prompter, or the business using the LLM, must rigorously scrutinize outputs for accuracy and appropriateness.

  • Zero-Trust Policy on LLM Outputs: Devon applies a zero-trust policy to LLM outputs. Users must understand the domain well enough to judge whether the LLM is producing erroneous or "hallucinated" content.

  • Effective Prompting Strategies: Beau offered practical tips for getting the most out of LLMs:

    • Persona Setting: Assign the LLM a persona, such as a marketing analyst, to tailor generated content to a specific audience or industry.

    • Clear and Detailed Instructions: The clarity of the prompt significantly impacts the utility of the output. Specify exactly what you expect, whether it's a table, a JSON object, or another specific format.

    • Contextual Data: Supplying relevant data and examples greatly improves the relevance and accuracy of the output by giving the LLM the context it needs.
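The three tactics above compose naturally into a single prompt template. The sketch below is our own illustration of that pattern, not Devon's internal tooling; the persona, task, and data values are invented examples.

```python
def build_prompt(persona, instructions, output_format, context_data):
    """Assemble a prompt that sets a persona, gives explicit instructions,
    pins down the output format, and supplies grounding context."""
    return "\n\n".join([
        f"You are {persona}.",                                    # persona setting
        f"Task: {instructions}",                                  # clear instructions
        f"Respond only with {output_format}.",                    # explicit format
        f"Use only the data below as context:\n{context_data}",   # contextual data
    ])

# Hypothetical example: summarizing a quarterly production trend.
prompt = build_prompt(
    persona="a marketing analyst at an energy company",
    instructions="Summarize the production trend in two sentences.",
    output_format="a JSON object with keys 'summary' and 'trend'",
    context_data="Q1: 1.2M bbl, Q2: 1.4M bbl, Q3: 1.5M bbl",
)
print(prompt)
```

Keeping these four pieces separate also makes the zero-trust review easier: a reviewer can check whether the output respects the requested format and stays within the supplied context before trusting it.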


"Beau Rollins’ insights into the disciplined use of LLMs at Devon underscored the importance of the human element in guiding these advanced tools. His approach not only enhances the output quality but also safeguards the ethical use of technology."

— Evan Jordan

 

Conclusion: Integrating Ethical AI Practices at Bear Cognition

 

The insights from OSU Analytics Day really hit home for us here at Bear Cognition. It’s clear that experts like Brian Griffin and Beau Rollins are leading the way in showing how to do AI the right way, and we’re all ears. We're putting their advice into practice every day, not just to boost the smarts of our AI tools but also to keep our ethical game strong. By weaving these principles into our work, we make sure our tech does good while it does well, staying true to our commitment to responsibility and integrity.

 

 


 
