How Can Openness in Research Adapt to Generative AI Tools?

Hi everyone,

I’ve been following the discussions around open science, open access, and open data, and I’m curious about how these values are evolving in the age of AI. With generative tools becoming more common in research, from writing support to data simulation, I’m wondering how we can ensure transparency and accountability when using these technologies.

Should there be a standard for disclosing when generative AI tools are used in producing academic content or analysis? And how do we preserve openness when some AI tools are developed by private companies with closed models?

On a related note, I recently explored a generative AI tutorial that walks through how these models work and how they can be applied responsibly in academic research. I thought I’d share it here in case others are exploring this space.

I’d love to hear your thoughts on how openness movements can adapt to include, or regulate, AI in science without losing their core principles.

Looking forward to the discussion!

Best, N Sanders