Published on March 12, 2026 at 06:01 CET (UTC+1)
Show HN: s@: decentralized social networking over static sites (125 points by remywang)
The article introduces the sAT Protocol (s@), a decentralized social networking protocol that operates over static websites. Each user stores their encrypted data on their own static site, and client software in the browser aggregates feeds and publishes posts directly between users' sites without any central servers or relays. It is designed for small, private social circles and explicitly avoids the infrastructure and influencer-centric models of mainstream platforms.
Temporal: The 9-year journey to fix time in JavaScript (577 points by robpalmer)
This post details the nine-year standardization effort behind the Temporal API, a modern date-and-time API for JavaScript. It explains the complex process of evolving JavaScript through the TC39 committee, highlighting Bloomberg's involvement and the challenges of achieving consensus among all browser vendors. The article positions Temporal as a long-overdue fix for JavaScript's historically problematic Date object.
Many SWE-bench-Passing PRs would not be merged (178 points by mustaphah)
A research note from METR analyzes the real-world utility of AI coding agents evaluated on the SWE-bench benchmark. It finds that roughly half of the pull requests that pass the benchmark's tests would not actually be merged by human repository maintainers due to issues like code quality or incorrect solutions. This indicates that benchmark scores can overstate practical usefulness and highlights the need for human feedback loops in AI-assisted development.
Tested: How Many Times Can a DVD±RW Be Rewritten? Methodology and Results (83 points by giuliomagnifico)
This is a detailed, empirical test to determine the practical rewrite limits of DVD±RW discs. The author describes a rigorous methodology involving repeated writing of data sets to discs until failure, carefully tracking errors and wear. The results provide concrete data on the longevity of this physical storage medium, a topic with little publicly available systematic testing.
Iran-backed hackers claim wiper attack on medtech firm Stryker (57 points by 2bluesc)
Security journalist Brian Krebs reports on a claimed cyberattack by the Iran-linked hacktivist group Handala against medical technology giant Stryker. The group alleges a data-wiping attack on over 200,000 systems globally, forcing office closures, ostensibly in retaliation for a recent missile strike in Iran. The article assesses the claim and its potential impact on critical healthcare infrastructure.
Making WebAssembly a first-class language on the Web (467 points by mikece)
This Mozilla Hacks article argues that WebAssembly (Wasm) remains a "second-class citizen" on the web despite its significant technical advancements. It identifies the core problem as poor integration with the wider web platform (e.g., DOM access, events), leading to a subpar developer experience. The post calls for and outlines a path toward making Wasm a "first-class language" with seamless interoperability with JavaScript and web APIs.
Don't post generated/AI-edited comments. HN is for conversation between humans (3089 points by usefulposter)
This is not a standard article but a direct link to a specific section of the Hacker News guidelines. The highlighted section explicitly states that users should not post comments generated or significantly edited by AI. It reinforces that Hacker News is intended for human-to-human conversation, reflecting a community policy to preserve authentic discussion.
I was interviewed by an AI bot for a job (206 points by speckx)
A first-person account from a journalist at The Verge who underwent a job interview conducted entirely by an AI avatar. The author describes the unsettling, uncanny valley experience of trying to connect with and present themselves to a synthetic interviewer. It explores the human and emotional challenges of AI-mediated hiring processes, even as they become more technologically sophisticated.
About memory pressure, lock contention, and Data-oriented Design (24 points by vinhnx)
The author narrates a performance debugging story within the Matrix Rust SDK, identifying memory pressure and lock contention as the culprits behind a frozen UI. The solution involved applying Data-Oriented Design (DOD) principles, which prioritize efficient data layout and access patterns over object-oriented abstractions. This approach resulted in dramatic performance improvements (98.7% faster execution, 7718.5% higher throughput), demonstrating DOD's relevance even in high-level applications.
Show HN: A context-aware permission guard for Claude Code (65 points by schipperai)
This Show HN presents "nah," an open-source, context-aware permission guard for the Claude Code AI coding assistant. It addresses the limitation of simple allow/deny permission systems by analyzing the specific context of tool calls (like file deletions or git operations) against user-defined rules. The tool aims to provide finer-grained safety and prevent AI agents from taking dangerous actions while still being useful.
1. The Benchmarking Gap: From Synthetic Tests to Real-World Utility
* Trend/Insight: There's a growing critical awareness that passing standardized AI benchmarks (like SWE-bench) does not equate to producing acceptable, production-ready work. The METR article shows a significant disconnect between benchmark success and human approval.
* Why it matters: It challenges how AI capability is measured and marketed. Over-reliance on naive benchmark scores can lead to misplaced trust and unrealistic expectations about autonomous AI agents.
* Implication: The next frontier for AI evaluation is integration into real human workflows with iterative feedback. Development will shift towards creating robust human-in-the-loop systems and benchmarks that measure "mergeability" or practical utility, not just test completion.

2. The Rise of AI Safety and Control Layers
* Trend/Insight: As AI agents gain more autonomy and tool-use capabilities (like Claude Code), users are building intermediary control systems ("guards") to manage risk. Tools like "nah" represent a move beyond simple off-switches to context-aware, policy-driven safety layers.
* Why it matters: This is a grassroots response to the inherent limitations of built-in, one-size-fits-all AI safety measures. It reflects a need for user-defined, auditable, and fine-grained permission systems, especially in domains like coding where actions have consequences.
* Implication: We'll see an ecosystem of third-party "AI middleware" emerge—tools that sit between the user/prompt and the powerful model to enforce governance, security, and operational policies, similar to how API gateways function today.

3. Human-AI Interaction Faces the "Uncanny Valley" of Social Roles
* Trend/Insight: The deployment of AI into socially complex roles (like interviewers) is causing friction. The Verge article highlights that technical functionality isn't enough; the human experience of interacting with an AI in a typically human-centric role can be unsettling and counterproductive.
* Why it matters: For AI integration to be successful in sensitive areas (hiring, therapy, customer service), design must address psychosocial factors, not just task efficiency. Poor design can lead to user rejection and ethical concerns.
* Implication: Increased focus on HCI (Human-Computer Interaction) research for AI, exploring better interaction paradigms (e.g., maybe text-only is better than an uncanny avatar). It also raises questions about when and where AI should simulate human roles versus creating new, explicitly non-human interaction formats.

4. Infrastructure Evolution to Support AI/ML Workloads
* Trend/Insight: Parallel trends in core web technology (WebAssembly seeking first-class status) and low-level performance optimization (Data-Oriented Design) are driven by the demands of complex, compute-intensive applications, many of which are AI/ML-related.
* Why it matters: The future of distributed AI on the edge (in browsers, on devices) depends on performant, portable, and secure execution environments. Wasm provides this portability, while techniques like DOD maximize hardware efficiency for data-heavy ML operations.
* Implication: AI developers will increasingly need to understand systems-level performance and leverage these next-generation web and systems programming paradigms to build efficient, client-side AI applications.

5. Decentralization as a Counter-Narrative to Centralized AI
* Trend/Insight: The development of protocols like sAT (decentralized social networking) represents a broader trend of building decentralized alternatives to centralized platforms, partly as a response to concerns about data control, algorithmic governance, and monopoly power—issues intensely relevant to the current AI landscape.
* Why it matters: It highlights a growing desire for technical architectures that inherently limit the concentration of data and influence, which is a direct critique of the dominant model where a few companies control vast AI training datasets and model access.
* Implication: We may see increased experimentation with decentralized or federated approaches to AI data sharing, model training, and inference, leveraging protocols that prioritize user ownership and peer-to-peer interaction over central servers.
6. AI as a Dual-Use Tool in Offensive Cybersecurity
* Trend/Insight: While not explicitly about AI, the sophisticated, large-scale wiper attack claimed by state-backed hackers points to a modern threat landscape where AI can empower both defenders and attackers. AI can automate vulnerability discovery, tailor phishing, or optimize attack propagation, making threats more scalable and potent.
* Why it matters: The AI/ML community's advancements in automation, pattern recognition, and agent behavior translate directly into more powerful cyber weapons. The attack on critical infrastructure (medtech) shows the high stakes.
* Implication: There will be an accelerating arms race in AI for cybersecurity. Developing defensive AI (detection, response) is crucial, but so is considering the ethical and security implications of openly publishing powerful AI capabilities that could be weaponized.
Analysis generated by deepseek-reasoner