bluesky
Latest Stories
Show HN: WhatCable, a tiny menu bar app for inspecting USB-C cables
We Run iSCSI over the Internet
A new T-Mobile network for Christians aims to block porn and gender-related content
A new US-wide cell phone network marketed to Christians is set to launch next week. It blocks porn, which experts in network security say marks the first time a US cell plan has used network-level blocking for such content that can’t be turned off even by adult account owners. It’s also rolling out a filter…
How People Ask Claude for Personal Guidance
If I Could Make My Own GitHub
SoftBank, Intel to develop ZAM memory – new memory designed as lower-power HBM
Your Biggest Vulnerability is your Shitty Compensation
Meta's Big Tobacco PR Tactics
C8s: A Confidential Kubernetes Architecture
Apple Says Mac Studio and Mac Mini Will Be in Short Supply for Months
Softmax, can you derive the Jacobian? And should you care?
Agentic Harness Engineering
Time is a construct but it can still break your software
Ryan welcomes Jason Williams, a senior software engineer at Bloomberg and the creator of the Rust-based JavaScript engine Boa, to the show to dive into why date and time handling in JavaScript is so difficult and how the Temporal proposal aims to fix it.
Roboticist-Turned-Teacher Built a Life-Size Replica of ENIAC
DeepTutor: Towards Agentic Personalized Tutoring
arXiv:2604.26962v1 Announce Type: new Abstract: Education represents one of the most promising real-world applications for Large Language Models (LLMs). However, conventional tutoring systems rely on static pre-training knowledge that lacks adaptation to individual learners, while existing RAG-augmented systems fall short in delivering personalized, guided feedback. To bridge this gap, we present DeepTutor, an agent-native open-source framework for personalized tutoring where every feature s...
Static Program Slicing Using Language Models With Dataflow-Aware Pretraining and Constrained Decoding
arXiv:2604.26961v1 Announce Type: new Abstract: Static program slicing is a fundamental software engineering technique for isolating code relevant to specific variables. While recent learning-based approaches using language models (LMs) show promise in automating slice prediction, they suffer from inaccurate dependency modeling and unconstrained generation, where LMs fail to capture precise data flow relations and produce slices containing hallucinated tokens and statements. To address these...
LLM Biases
arXiv:2604.26960v1 Announce Type: new Abstract: Transformer-based agentic AI is rapidly being deployed on major platforms to help users shop, watch, and navigate content with less effort. While these systems can deliver impressive performance, a key concern is whether they may be less reliable than they appear. We ask a simple but fundamental question: whether the mechanisms that make transformer-based agents effective can also induce systematic biases or distortions? We study this question ...
CareGuardAI: Context-Aware Multi-Agent Guardrails for Clinical Safety & Hallucination Mitigation in Patient-Facing LLMs
arXiv:2604.26959v1 Announce Type: new Abstract: Integrating large language models (LLMs) into patient-facing healthcare systems offers significant potential to improve access to medical information. However, ensuring clinical safety and factual reliability remains a critical challenge. In practice, AI-generated responses may be conditionally correct yet medically inappropriate, as models often fail to interpret patient context and tend to produce agreeable responses rather than challenge uns...
Designing Ethical Learning for Agentic AI: Toegye Yi Hwang's Ethical Emotion Regulation Framework
arXiv:2604.26958v1 Announce Type: new Abstract: Agentic AI systems capable of autonomous goal setting and proactive intervention introduce new challenges for regulating moral-emotional processes in learning environments. Existing frameworks typically treat emotion as reactive feedback or engagement optimization, overlooking the need for normative regulation across autonomous decision cycles. This paper proposes an ethical emotion regulation framework for agentic AI learning design inspired by...
Simulating Validity: Modal Decoupling in MLLM Generated Feedback on Science Drawings
arXiv:2604.26957v1 Announce Type: new Abstract: In science education, students frequently construct hand-drawn visual models of scientific phenomena. These drawings rely on a visual structure where information is encoded through visual objects, their attributes, and relationships. Multimodal large language models (MLLMs) are increasingly used to generate feedback on students' hand-drawn scientific models. However, the validity of such feedback depends on whether model claims are grounded in ...