Using AI to Determine the Political Point of View of Books

In this blog post, we discuss our experiments with AI, specifically ChatGPT, to determine the political leanings of books, with the goal of maintaining diversity in library collections without contributing to book bans. We share an initial test using both distinctly political and politically neutral titles to see whether the AI can accurately identify their perspectives, and the results are encouraging. Moving forward, we plan to challenge the AI with newer and less straightforward titles, exploring how this technology can help keep our libraries inclusive and welcoming.
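For readers curious how a check like this could be scripted rather than typed into the ChatGPT interface one title at a time, here is a minimal sketch using the OpenAI Python client. The model name, prompt wording, and example title are assumptions for illustration, not the exact setup from our test.

```python
# Minimal sketch: asking an LLM to characterize a book's political perspective.
# Assumes the openai Python package and an OPENAI_API_KEY in the environment;
# the model name and prompt wording are illustrative, not our exact test setup.
from openai import OpenAI

client = OpenAI()

def classify_political_leaning(title: str, author: str) -> str:
    """Ask the model to place a book on a rough left/neutral/right spectrum."""
    prompt = (
        f"Describe the political point of view, if any, of the book "
        f"'{title}' by {author}. Answer with one of: left-leaning, "
        f"right-leaning, or politically neutral, followed by a one-sentence rationale."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any current chat model could be substituted
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Example title chosen only to show the call; swap in titles from your own collection.
    print(classify_political_leaning("Profiles in Courage", "John F. Kennedy"))
```

Scripting the question this way would make it easier to run the same prompt across a larger list of titles and compare the answers side by side.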

More "Racism in AI Models" News, Unfortunately.

We review new research showing that large language models (LLMs), like GPT-4, exhibit biases against dialects and names associated with certain racial groups. One study from the Allen Institute for AI found that these models exhibit prejudice against speakers of African-American English, potentially affecting decisions in areas like HR, criminal justice, and finance. Additionally, a Bloomberg report demonstrated a similar bias in AI evaluations of resumes bearing names common among Black and Hispanic people, suggesting that, despite efforts to curb bias in AI, it remains a significant problem rooted in the data these models are trained on.
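To make the general approach behind the resume findings concrete, here is a simplified sketch of a name-substitution probe: identical resume text, with only the applicant's name swapped, is scored by a model and the scores are compared. This is not the researchers' actual code; the names, resume text, model, and scoring prompt are placeholders for illustration only.

```python
# Simplified illustration of a name-substitution bias probe, in the spirit of the
# resume experiments described above. Not the studies' actual methodology or code:
# the names, resume text, model, and scoring prompt are placeholder assumptions.
from openai import OpenAI

client = OpenAI()

RESUME_TEMPLATE = """\
{name}
Experience: 5 years as a financial analyst; built quarterly forecasting models.
Education: B.S. in Economics.
Skills: Excel, SQL, Python.
"""

# Hypothetical names; real studies draw on names with documented demographic associations.
NAMES = ["Emily Walsh", "Lakisha Washington", "Jose Hernandez", "Greg Baker"]

def score_resume(name: str) -> str:
    """Ask the model to rate an otherwise identical resume that differs only by name."""
    prompt = (
        "Rate the following resume for a financial analyst role on a scale of 1 to 10. "
        "Reply with only the number.\n\n" + RESUME_TEMPLATE.format(name=name)
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    # Identical resumes, only the name changes; consistently divergent scores would
    # hint at name-based bias of the kind the studies describe.
    for name in NAMES:
        print(name, "->", score_resume(name))
```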

Canva's AI-based "Magic Design" Tool

At a recent workshop on integrating AI into library work, attendees were introduced to Canva's "Magic Design" tool, which auto-generates a presentation from a user's topic description, combining draft text with stock images. While not perfect, the feature offers a promising starting point for anyone needing a quick presentation draft, and users can easily customize the generated slides to fit their needs.