Hacker News | pcoz's comments

Most neural networks assume computation is instantaneous: an input arrives, a function runs, an output appears. Even with sequences, time is often modeled indirectly (via windowing, stacking, or recurrence), so the model still reacts rather than exists in time.

This project explores Temporal Neural Networks (TNNs): neurons as continuous-time dynamical systems with internal state and inertia. Instead of y = f(x), the network evolves via dV/dt = f(V, x), and predictions emerge through a settling process - not a single forward pass.
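To make the settling idea concrete, here is a minimal sketch (my own illustration, not this project's code) of a leaky linear neuron integrated with Euler steps: the state V is pulled toward the input drive and the "prediction" is whatever V has settled to.

```python
import numpy as np

def settle(x, w, tau=1.0, dt=0.01, steps=500):
    """Euler-integrate dV/dt = (-V + w @ x) / tau until the state settles."""
    V = np.zeros(w.shape[0])
    for _ in range(steps):
        dV = (-V + w @ x) / tau   # leaky dynamics pull V toward w @ x
        V += dt * dV
    return V

w = np.array([[1.0, 2.0]])
x = np.array([0.5, 0.5])
print(settle(x, w))   # after ~5 time constants, close to w @ x = 1.5
```

Because the output is the fixed point of a dynamical system rather than a single function evaluation, a transient glitch in x only nudges V briefly before the dynamics pull it back, which is one way to picture the flip resistance described below.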

On clean data, TNNs typically match classical accuracy. The difference shows up under real-world stress: noise, missing samples, irregular streams. TNNs produce far fewer prediction flips and degrade more gracefully - stability comes from the computation itself, not post-hoc smoothing.


Are we overusing neural networks for time-series analysis?

Neural networks are incredible tools.

But in many real-world time-series problems, they’re simply overkill.

If your signal is driven by a handful of rhythms, trends, decays, or saturations, training a model with thousands (or millions) of parameters can be like using a jet engine to stir tea. You’ll get motion - but also compute cost, power draw, latency, and deployment pain.

That’s why I built and open-sourced Time Series Formula Finder, which takes a different approach:

• Find the simplest mathematical forms that explain the data
• Decompose signals into layers of structure
• Produce human-readable equations, not black boxes

Instead of “trust this model,” the output is: “Here’s the equation that explains your signal.”

Why formulas often beat NNs in practice

1. Edge deployment becomes trivial

A formula runs in a few operations per sample. No GPU. No heavy runtime. Ideal for microcontrollers, industrial devices, and battery-powered sensors.

2. Interpretability is the feature

In engineering and operational domains, explanation matters. A formula exposes frequency, decay rates, drift, and saturation directly.

3. Lower lifetime cost

Neural networks invite retraining cycles, monitoring pipelines, drift detection, and version churn. A good equation often stays valid far longer.

4. Debugging is possible

If performance drops, you can ask which term stopped matching reality. With NNs, the answer is usually far less clear.

What the tool actually does is search for promising partial forms: expressions that explain part of the signal well. It fits one, subtracts it, then analyzes the residual.

This produces a layered explanation rather than a monolithic model.
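As a rough illustration of the fit-subtract-analyze loop (my own sketch, not the tool's actual API), one layer can be a least-squares fit of a sinusoid plus offset; the residual is what the next layer would explain:

```python
import numpy as np

def fit_layer(t, y, freq):
    """Least-squares fit of a*sin(2*pi*f*t) + b*cos(2*pi*f*t) + c to y."""
    A = np.column_stack([np.sin(2 * np.pi * freq * t),
                         np.cos(2 * np.pi * freq * t),
                         np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef, y - A @ coef   # coefficients and the unexplained residual

# Layer 1: fit the dominant rhythm, then look at what's left over.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
y = 3.0 * np.sin(2 * np.pi * 0.5 * t) + 1.0 + rng.normal(0, 0.05, t.size)
coef, resid = fit_layer(t, y, freq=0.5)
```

The recovered coefficients are the human-readable part: amplitude, phase (from the sin/cos pair), and offset, with the residual's size telling you how much structure remains to be explained.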

All feedback appreciated!


Built a small experiment around conditional compute by reasoning type.

This repo (1) classifies prompts into 6 reasoning types (weighting/consensus/deduction/comparison/causal/lookup) via simple patterns, then (2) runs GPT-2 Small and compares neuron activation patterns by type (overlap + “type-specific” neurons). On a toy set (48 prompts) the classifier is ~92% accurate, and some type pairs show low overlap.

This has potential applications in query-aware routing / MoE-style gating, serving-cost reduction (skipping irrelevant compute for “lookup-ish” prompts), prompt triage (sending hard cases to stronger models/tools), and interpretability (which subnetworks light up for which reasoning demands).
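The pattern-based classifier step can be pictured like this (hypothetical patterns for a few of the six types; the repo's actual rules may differ):

```python
import re

# Hypothetical keyword patterns -- illustrative only, not the repo's rules.
PATTERNS = {
    "comparison": r"\b(compare|versus|vs\.?|better than)\b",
    "causal":     r"\b(why|because|causes?|leads? to)\b",
    "deduction":  r"\b(if .* then|therefore|implies)\b",
    "lookup":     r"\b(what is|who is|when did|define)\b",
}

def classify(prompt):
    """Return the first reasoning type whose pattern matches the prompt."""
    for label, pat in PATTERNS.items():
        if re.search(pat, prompt.lower()):
            return label
    return "unknown"
```

First-match ordering makes the rules cheap but order-sensitive, which is consistent with a small hand-labeled set being enough to reach high accuracy on a toy benchmark.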


AI and The Art of Persuasion


We built an AI script that models persuasion as navigation through two-dimensional space—advancing legal arguments while simultaneously managing emotional progression. The framework applies to any persuasive context: sales, negotiations, dispute resolution, even marketing.

Read how we translated abstract philosophy into practical AI that actually works: https://fleetingswallow.com/winning-small-claims-with-ai/

#AI #Persuasion #AIScripting


Traditional knowledge graphs fail when applied uniformly to mixed documentation types. Force a well-organized spec through the same extraction pipeline as chaotic Slack threads and you either over-process structured content or under-extract from conversations.

AILang's Knowledge Amalgamator solves this by processing documents according to their inherent structure. Well-structured docs (Confluence, specs) get minimal internalization—just outlines and anchors. Why re-serialize what's already navigable? Loosely-structured sources (Slack, email) undergo heavy extraction of decisions, risks, and procedures buried in conversations.

The system uses a Person-based memory architecture that mirrors human cognition: separate episodic, semantic, and procedural memory types with natural boundaries between them. The lightweight schema eliminates massive ML costs while enabling production-grade reliability.

GitHub: https://github.com/pcoz/ailang/tree/main/examples/knowledge_...
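A rough sketch of the episodic/semantic/procedural split (my own illustration, not AILang's actual schema):

```python
from dataclasses import dataclass, field

# Illustrative only: one Person's memory, split by type with hard boundaries.
@dataclass
class PersonMemory:
    episodic: list = field(default_factory=list)    # dated events and decisions
    semantic: dict = field(default_factory=dict)    # facts and definitions
    procedural: list = field(default_factory=list)  # how-to steps

    def ingest(self, kind, item):
        """Route an extracted item into exactly one memory type."""
        if kind == "episodic":
            self.episodic.append(item)
        elif kind == "procedural":
            self.procedural.append(item)
        else:
            self.semantic.update(item)

m = PersonMemory()
m.ingest("episodic", "2024-05-03: decided to ship v2")
m.ingest("semantic", {"TNN": "temporal neural network"})
m.ingest("procedural", "restart the worker, then replay the queue")
```

Keeping the three stores separate is what lets a Slack thread contribute decisions (episodic) and a spec contribute definitions (semantic) without forcing both through one extraction schema.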


It's really a prompt engineering thing. Prompt engineering lets you play different areas of the LLM's training data off against each other, producing a useful result from the intersection of the two sets (knowledge areas) of training data. The make_bigger(document) effect occurs when you let the LLM freewheel.


I wrote this language to address the core issue with letting AI into production environments: the AI is unpredictable to a certain degree. The purpose of the language is to provide hard, defined constraints within which the AI's human-like processing is invoked. All feedback welcome! Thanks, Edward

