Thrilled to announce that on April 21 at 2pm Central, CBI Tomash Fellow and Harvard ABD Aaron Gluck-Thaler will be giving an online Tomash Fellow Lecture entitled "Pattern Recognition and Intelligence Reform in Cold War America." Please see the link below to register for this free/public event. #ai #aihistory #artificialintelligence #aipolicy #science #tech #history #sts #sociology #anthro

@histodons
@comm
@sociology
@anthropology
@politicalscience

cse.umn.edu/cbi/events/2025-to

Announcing
AITRAP,
The AI hype TRAcking Project

Here:
poritz.net/jonathan/aitrap/

What/why:
I keep a very random list of articles about AI, with a focus on hype, ethics, policy, teaching, IP law, some of the CS aspects, etc., now up to 1000s of entries.

I decided to share, in case anyone is interested; I'm thinking of people who like @emilymbender, @alex, & @davidgerard. If there is a desire, I'll add a UI to allow submission of new links, commentary, hashtags.

www.poritz.net · AITRAP -- AI hype Tracking Project

🤖 AI content is here to stay and our flagship project, Liberato, has had a policy in place since 2023 to keep its platform safe, ethical, and creator-friendly.

Here’s how Liberato handles AI-generated content while protecting rights, privacy, and trust. 🧵

✅ AI-generated content rules:
🔹 Tagged/watermarked as AI-generated
🔹 Deepfakes of real people require consent (1/2)

Do you think it’s right to put AI's potential in the hands of tech titans, when the tech was built on devouring what millions of us have posted on the internet? A few months ago Microsoft expressed its concerns about the anticipated misuse of AI. Last month Google dropped its policy that its AI could not be used for weapons development … [6 min. read] bryl.link/12e #AIpolicy

When The Seattle Times and The Associated Press partnered to investigate school surveillance, reporters inadvertently received access to almost 3,500 sensitive, unredacted student documents through a records request. The documents were stored without a password or firewall, and anyone with the link could read them:

apnews.com/article/ai-school-c

Zoe Reiland, 17, sits with her cat, Cracker, and talks about how she and her younger brother were monitored by surveillance technology at their previous schools in Oklahoma, Monday, March 10, 2025, in Clinton, Miss. (AP Photo/Rogelio V. Solis)
AP News · AI surveillance on school Chromebooks has security issues, investigation shows · By Sharon Lurye

📢 Big news for AI in Education in Illinois!

Two key bills—SB1556 & HB2503—are moving through the legislature to ensure responsible AI use in schools. They focus on oversight, student safety, and AI literacy.

💡 We need your support!
✔️ Advocate with legislators 📞
✔️ Spread the word 📢
✔️ Endorse the bills 🏛️

Let’s make AI work for students & teachers! 🚀

🔗 Learn more:
SB1556: ilga.gov
HB2503: ilga.gov

In democratic societies, people trust state institutions more than private #AI developers. Meanwhile, in countries with corrupt governments, people trust algorithms and technology more than thieving bureaucrats.

doi.org/10.1007/s00146-025-022

SpringerLink · WEIRD? Institutions and consumers’ perceptions of artificial intelligence in 31 countries - AI & SOCIETY

A survey of perceptions of Artificial Intelligence in 31 countries in 2023 (Ipsos, Global Views on A.I. 2023, https://www.ipsos.com/sites/default/files/ct/news/documents/2023-07/Ipsos%20Global%20AI%202023%20Report-WEB_0.pdf, accessed 17 May 2024) yields significantly less positive perceptions of the new technology in developed western economies than in emerging and non-western economies. This could reflect citizens in non-Western countries perceiving machines (computers) and algorithms differently from those in Western countries, or a more positive outlook in countries with weak democratic institutions arising from a preference for algorithmic precision over inconsistent and/or corrupt regulation and decision-making. However, it could also reflect the different psychology of “WEIRD” (Western, Educated, Industrialised, Rich, Democratic) countries.

Regressing the survey responses against measures of the “WEIRD” dimensions, we find that reported understanding of, willingness to trust, and anticipation of change due to AI applications are consistently negatively correlated with a country’s education levels (E) and average income per capita (R). The sophistication of democratic institutions (D) and “Westernness” (W), both alone and in combination with the other factors, have statistically significant negative effects on the percentage of respondents in any given country having positive perceptions of AI and its prospects.

The consistency of the negative relationship between the sophistication of democratic institutions and country-level perceptions of AI brings into question the role of regulation of the new technology. WEIRD societies are presumed to rely on democratic institutions for assurances that they can transact safely with strangers; institutions thus substitute for the trust non-WEIRD societies place in friends, family and close community contacts when transacting. Third-party (and notably government) assurances, in the context of uncertainty created by the emergence of new AI technologies, arguably condition perceptions of the safety of these technologies through the presence (or absence) of regulations governing their implementation and use. Differences in perceptions of data privacy between European countries and their other western counterparts support the contention that the mere presence of AI regulation may be sufficient to alter perceptions in WEIRD societies, regardless of whether the regulations are necessary or even effective in increasing user safety. This has implications for interpreting and responding to political pressure to regulate new technologies in WEIRD countries.