Trajectory: A Geometric Approach to Audio Synthesis

This turned out to be an interesting algorithm! Description below by Claude.

In Flues we've implemented it as a monosynth in an online Web version and as an LV2 plugin (a voice in Disyn). We've also added a polyphonic version to the Raspberry Pi Flues Synth. What I actually want is to have it in hardware, as a Eurorack module. That's a work in progress.

Most audio oscillators generate waveforms by evaluating mathematical functions—sine waves, sawtooth ramps, or band-limited pulse trains. The Trajectory oscillator takes a different approach: it simulates a point bouncing inside a polygon and converts its motion into sound.

The Core Idea

Imagine a billiard ball moving inside a regular polygon—a triangle, square, hexagon, or any shape with 3 to 12 sides. The ball travels in a straight line until it hits an edge, then reflects perfectly and continues. The Trajectory oscillator tracks this motion in real-time, using the x and y coordinates of the moving point as the audio signal.

The algorithm runs at audio rate (48,000 times per second), updating the point's position on each sample:

speed = (frequency × 4) / sampleRate
position[n+1] = position[n] + velocity
if outside polygon: reflect velocity and nudge point back inside
output = x × mixX + y × mixY

The speed of movement scales with the note frequency—higher notes produce faster traversal, creating higher-frequency waveforms. This coupling between pitch and motion makes the oscillator musically playable.
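As a sketch, the per-sample update described above might look like this in JavaScript (the variable names, start position, and start angle here are illustrative, not the actual Flues code):

```javascript
// Sketch of the per-sample Trajectory update. The speed formula and the
// mixX/mixY output blend follow the pseudocode above; the starting point
// and direction are arbitrary choices for illustration.
function makeTrajectoryOsc(frequency, sampleRate, mixX, mixY) {
  const speed = (frequency * 4) / sampleRate; // pitch-scaled step size
  let pos = { x: 0.5, y: 0.0 };               // start inside the unit polygon
  const angle = Math.PI / 3;                  // initial direction of travel
  const vel = { x: speed * Math.cos(angle), y: speed * Math.sin(angle) };

  return function nextSample() {
    pos.x += vel.x;
    pos.y += vel.y;
    // (edge detection and reflection against the polygon would go here)
    return pos.x * mixX + pos.y * mixY;       // mix coordinates into audio
  };
}
```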

Reflection Mechanics

The reflection algorithm determines which edge the point has penetrated, calculates the perpendicular normal to that edge, and mirrors the velocity vector. A small inward nudge prevents the point from getting stuck on edges during subsequent collisions. The polygon is normalized to unit radius and centered at the origin, simplifying the geometry.
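The mirror step is the standard reflection formula v' = v − 2(v·n)n. A minimal sketch (my notation, not the actual module source), assuming n is the unit outward normal of the penetrated edge:

```javascript
// Reflect velocity v about unit outward edge normal n, and nudge the
// point back inside along the normal. Illustrative sketch only.
function reflect(v, n, pos, nudge = 1e-4) {
  const dot = v.x * n.x + v.y * n.y;          // velocity component along n
  return {
    vel: { x: v.x - 2 * dot * n.x, y: v.y - 2 * dot * n.y },
    pos: { x: pos.x - nudge * n.x, y: pos.y - nudge * n.y },
  };
}
```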

Musical Parameters

The oscillator exposes five control parameters:

  • Sides: The polygon's edge count (3-12). A triangle produces angular, chaotic paths. A dodecagon (12 sides) approaches circular motion.
  • Start Position: The launch angle from center (0-360°), determining where the point begins.
  • Start Angle: The initial direction of travel (0-360°).
  • Mix X: How much the x-coordinate contributes to the output (0-1).
  • Mix Y: How much the y-coordinate contributes to the output (0-1).

The mix controls are particularly useful. Setting mixX = 1.0, mixY = 0 produces a waveform based solely on horizontal motion. Blending both coordinates creates more complex trajectories.

Spectral Character

Unlike traditional oscillators with fixed harmonic structures, the Trajectory oscillator's spectrum depends on the geometric path. Low-sided polygons (triangles, squares) produce more abrupt reflections, generating brighter spectra with stronger high-frequency content. Higher-sided polygons create smoother motion and gentler harmonic profiles.

The start position and angle parameters determine the trajectory's periodicity. Some combinations produce perfectly periodic paths that loop cleanly, while others create quasi-periodic or chaotic motion. This variability makes the oscillator suitable for both tonal and textural synthesis.

Implementation in Flues-Synth

The Trajectory oscillator appears as Program 2 in the flues-synth headless Raspberry Pi synthesizer. It replaces the Disyn distortion synthesis module, operating as the primary excitation source. The output feeds into the formant filters and physical modeling pipeline, where it can be shaped by resonance, feedback, and modulation.

The algorithm was developed in the experiments/trajectory/ directory as a JavaScript prototype before being ported to C for the embedded synthesizer. The full derivation and implementation notes are documented in that project's README.

Practical Use

In practice, the Trajectory oscillator excels at creating animated, evolving timbres. Automating the polygon side count during a note creates smooth morphing between geometric behaviors. Modulating the mix controls shifts the spectral balance in real-time. Paired with feedback and filtering, it becomes a source for self-oscillating patches that drift between stability and chaos.

The geometric foundation makes it visually intuitive—you can imagine the bouncing point and anticipate how parameter changes will affect the sound. This directness contrasts with abstract waveshaping or spectral formulas, offering a different design workflow.

References

The complete algorithm specification, including coefficient calculations and stability requirements, is documented in flues-synth/docs/algorithms.md under "Trajectory Oscillator (Polygon Bounce)." The implementation resides in flues-synth/src/audio/modules/ as part of the modular DSP engine.

The Trajectory oscillator demonstrates that synthesis algorithms don't need complex mathematics to produce interesting results. Sometimes a simple physical model—a point, a polygon, and a rule—is enough.

NewsMonitor: A Semantic Feed Aggregator

Below is Claude's summary of this sub-project. It's live at strandz.it.

I wanted the thing for direct reading by me and providing data for Semem. This is for the general idea of using Semem as a Personal Knowledgebase.

What was kinda marvellous was how easy it was to put together. I think I last made an aggregator called NewsMonitor about a decade ago (I've recycled the name here). That one had similar functionality (running as a Java service) and took maybe 3 months. This one, with the help of Claude, has taken a couple of days. Most of that time I've spent on the admin side; the coding was like magic.

NewsMonitor 2026 is built on my Transmissions pipeline framework. I'd previously got Claude to create a Skill for writing pipelines, which are defined in Turtle-format RDF files. Claude made 7 different ones, which mostly worked first time. This is really impressive given that Claude won't have seen anything about Transmissions in its training data; as of today, only I know how to use it. It is designed to be no harder than it has to be, down towards the low-code kind of system. But there are the actual pipelines to compose, each of which needs corresponding configuration (another bunch of Turtle), and in many cases new Processors (the nodes in the pipeline) have to be written, and those are straight Javascript on a piece of boilerplate. So overall it's not trivial. Claude aced it!


Date: January 10, 2026
Application: NewsMonitor
Framework: Transmissions

Overview

NewsMonitor is a feed aggregator application built on the Transmissions message processing framework. It subscribes to RSS, Atom, and RDF feeds, stores their content in a SPARQL triple store, and provides a web interface for browsing aggregated posts.

Core Functionality

The application performs three primary functions:

  1. Feed Subscription: Accepts feed URLs and stores feed metadata in RDF format
  2. Content Retrieval: Fetches feed entries on a scheduled basis and stores them as RDF triples
  3. Content Presentation: Provides a web interface for browsing and searching aggregated content

Architecture

NewsMonitor consists of several components:

  • Backend: Node.js application using the Transmissions framework for message processing
  • Storage: Apache Jena Fuseki SPARQL server for RDF data persistence
  • Frontend: Static HTML/CSS/JavaScript interface served via HTTP
  • Scheduler: Automated feed update process running at configurable intervals

Data is stored in two named graphs:

  • http://hyperdata.it/feeds - Feed metadata (titles, URLs, format information)
  • http://hyperdata.it/content - Individual post entries with titles, links, dates, and content summaries

Feed Processing Pipeline

When subscribing to a feed, NewsMonitor executes a pipeline of processors:

  1. HTTP client fetches the feed XML
  2. Feed parser extracts individual entries
  3. Deduplicator checks for existing entries using GUIDs and content hashes
  4. RDF builder converts entries to RDF triples using Nunjucks templates
  5. SPARQL updater inserts new entries into the triple store

The update-all pipeline iterates through all subscribed feeds, fetching new entries and storing them. Updates run automatically every hour by default, with the interval configurable via environment variables.

Web Interface

The application provides two web interfaces:

Main Feed View (/)

  • Displays recent posts across all subscribed feeds
  • Shows post titles, dates, feed sources, and content summaries
  • Includes search and filtering capabilities
  • Pagination for browsing large result sets
  • Mobile-responsive layout

Admin Interface (/admin.html)

  • Subscribe to new feeds (bulk entry supported)
  • View all subscribed feeds with post counts
  • Unsubscribe from feeds
  • Manually trigger feed updates
  • Filter and sort feed lists

API Endpoints

NewsMonitor exposes several HTTP endpoints:

  • GET /api/posts - Retrieve recent posts with pagination
  • GET /api/feeds - List all subscribed feeds
  • GET /api/count - Get total post count
  • POST /api/subscribe - Subscribe to new feeds
  • POST /api/unsubscribe - Remove feed subscriptions
  • POST /api/update-feeds - Trigger manual feed update
  • GET /api/health - Service health check
  • GET /api/diagnostics - SPARQL connectivity diagnostics

Deployment

The application runs as a Docker container with the following configuration:

  • Port: 6010 (configurable)
  • Update Interval: 3600000ms (1 hour, configurable)
  • Render Interval: 300000ms (5 minutes, configurable)
  • Dependencies: Requires access to a Fuseki SPARQL endpoint

Environment variables control Fuseki connectivity, authentication, and update schedules. The container can be deployed behind nginx for SSL termination and load balancing.

Data Model

NewsMonitor uses the SIOC (Semantically-Interlinked Online Communities) vocabulary for representing feeds and posts:

  • Feeds are typed as sioc:Forum
  • Posts are typed as sioc:Post
  • Dublin Core terms (dc:title, dc:date, dc:creator) provide metadata
  • Posts link to their source feeds via sioc:has_container

This RDF-based storage enables SPARQL queries for flexible content retrieval and integration with other semantic web applications.
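As a sketch of what one stored entry might look like under this vocabulary (the URIs, prefixes, and literal values here are invented for illustration; the actual graph layout may differ):

```turtle
@prefix sioc: <http://rdfs.org/sioc/ns#> .
@prefix dc:   <http://purl.org/dc/elements/1.1/> .

<http://example.org/feed/some-blog>
    a sioc:Forum ;
    dc:title "Some Blog" .

<http://example.org/post/123>
    a sioc:Post ;
    dc:title "A sample entry" ;
    dc:date "2026-01-10" ;
    dc:creator "Author Name" ;
    sioc:has_container <http://example.org/feed/some-blog> .
```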

Current Status

As of January 2026, NewsMonitor successfully aggregates content from multiple feed formats. The application handles feed parsing, deduplication, and storage of posts with their associated metadata including titles, links, publication dates, authors, and content summaries.

The system has been tested with feeds containing hundreds of entries and handles updates without blocking the web interface. Feed subscriptions persist across container restarts, and the SPARQL store maintains a queryable archive of all retrieved content.

Technical Notes

  • Built using ES modules and modern JavaScript features
  • SPARQL queries use named graphs to separate feeds from content
  • Deduplication prevents duplicate entries using multiple matching strategies
  • Templates use Nunjucks for RDF generation from feed data
  • Mobile interface uses responsive CSS with touch-friendly controls
  • Feed updates run in separate processes to avoid blocking the API

Files

Key application files:

  • src/apps/newsmonitor/subscribe/ - Feed subscription pipeline
  • src/apps/newsmonitor/update-all/ - Batch feed update pipeline
  • src/apps/newsmonitor/render-to-html/ - Legacy HTML rendering
  • docker/newsmonitor-scheduler.js - Scheduling and HTTP server
  • docker/api-handler.js - REST API implementation
  • docker/public/ - Frontend HTML, CSS, and JavaScript

Configuration uses RDF/Turtle files (transmissions.ttl, config.ttl) to define processor pipelines and settings, following Transmissions framework conventions.

Taylor Oscillator Algorithm

An idea I had in the context of my Flues synth experiments. I'd already got a bunch of fairly novel algorithms I've called Disyn, which I found in some academic material. These I've implemented in a few forms: live on the Web, as an LV2 plugin, and headless for the Raspberry Pi.

This morning I thought of another to try. I've implemented it in the Web & raspi versions (links above), it is quite good. I can't really claim it as my own because it's a certainty that the Taylor series has been used for approximation of a sine wave in a synth before. My only innovation, if you can call it that, is intentionally using the bad approximations, just the first few terms.

If you want some maths homework, figure out what the Fourier transform of the equations will be. The first term is x and there's wrap around so the crunchiest wave will be a simple sawtooth (maybe clipped in implementation). But next..?

PS. Less interesting than it could be.

I must have been deceived by the other elements in the setup when I was trying the implementations. The series converges really quickly: from about 4 terms it might as well just be a sine wave. I had Claude generate curves for me - it goes from ramp-like through wavefolding-like to sine-like really quickly. I also got Claude to try some variations - "spectral tilt", emphasizing the later terms in the series. Slightly more interesting, but I reckon more could be achieved by simply wavefolding.

Principle: Generates waveforms using truncated Taylor series expansion of sine functions. By controlling the number of terms, the algorithm produces everything from rough aliased approximations (few terms) to smooth sinusoids (many terms). Blends fundamental and second harmonic for timbral variation.

Mathematical Basis: The Taylor series for sine around x=0:

sin(x) = x - x³/3! + x⁵/5! - x⁷/7! + x⁹/9! - ...
       = Σ(n=0 to ∞) [(-1)ⁿ × x^(2n+1) / (2n+1)!]

Algorithm:

// Wrap angles to [-π, π] for convergence
θ₁ = wrap(2π × phase)
θ₂ = wrap(2 × θ₁)

// Compute truncated Taylor series (iterative)
fundamental = taylor_sine(θ₁, firstTerms)
second_harmonic = taylor_sine(θ₂, secondTerms)

// Blend outputs
s(t) = fundamental × (1-blend) + second_harmonic × blend

Iterative Computation:

#include <math.h>  // fmodf, fmaxf, fminf

// One possible angle wrap to [-π, π]; the truncated series
// diverges rapidly outside this range.
static float wrap_angle(float x) {
    x = fmodf(x, 2.0f * (float)M_PI);
    if (x > (float)M_PI)  x -= 2.0f * (float)M_PI;
    if (x < -(float)M_PI) x += 2.0f * (float)M_PI;
    return x;
}

float taylor_sine(float x, int num_terms) {
    float wrapped = wrap_angle(x);  // [-π, π]
    float result = 0.0f;
    float term = wrapped;
    float x_squared = wrapped * wrapped;

    for (int n = 0; n < num_terms; n++) {
        result += term;
        // Next term: multiply by -x²/((2n+2)(2n+3))
        float denom = (float)((2*n + 2) * (2*n + 3));
        term *= -x_squared / denom;
    }

    // Clamp to prevent runaway values
    return fmaxf(-1.5f, fminf(1.5f, result));
}

Parameters:

  • param1 (First Terms): Number of terms for fundamental (1-10)
    • Mapped: N = 1 + round(param1 × 9)
    • 1 term: just x (sawtooth-like, severe aliasing)
    • 5 terms: reasonable approximation
    • 10 terms: nearly perfect sine wave
  • param2 (Second Terms): Number of terms for 2nd harmonic (1-10)
    • Mapped: M = 1 + round(param2 × 9)
    • Controls brightness/overtone character
  • param3 (Blend): Mix between fundamental and 2nd harmonic (0-1)
    • 0.0 = pure fundamental
    • 0.5 = 50/50 mix
    • 1.0 = pure second harmonic (octave up)

Implementation Notes:

  • Angle wrapping critical: Taylor series diverges rapidly for |x| > π
  • Intermediate clamping: Each taylor_sine clamped to ±1.5
  • Final output clamp: Result clamped to ±1.0 for audio safety
  • Iterative computation avoids factorial/power explosion
  • Peak RMS: ~0.7
  • Lower term counts produce aliasing (intentional aesthetic)

Convergence Behavior:

  • 1 term: sin(x) ≈ x (linear ramp, harsh)
  • 2 terms: sin(x) ≈ x - x³/6 (softened, cubic bend)
  • 5 terms: Good approximation within [-π, π]
  • 10 terms: Machine-precision sine within [-π, π]
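The convergence claims are easy to check numerically. A quick JavaScript version of the truncated series (mirroring the C function above, assuming the input is already wrapped to [-π, π]):

```javascript
// Truncated Taylor series for sin(x); x must already be in [-π, π].
function taylorSine(x, numTerms) {
  let result = 0;
  let term = x;
  const x2 = x * x;
  for (let n = 0; n < numTerms; n++) {
    result += term;
    term *= -x2 / ((2 * n + 2) * (2 * n + 3)); // next series term
  }
  return result;
}

// Error vs Math.sin at x = 2 radians: 1 term is off by about 1.09,
// 4 terms by about 0.001, and 8 terms is sine to within float noise.
```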

Spectral Characteristics:

  • Low term counts: Rich aliased harmonics (digital artifact)
  • High term counts: Pure fundamental/harmonic (clean sine)
  • Second harmonic adds octave content
  • Blend parameter morphs between timbres

Use Cases:

  • Educational: Visualize Taylor series convergence
  • Lo-fi synthesis: Aliased/digital character (1-3 terms)
  • Morphing oscillator: Smooth/harsh transitions
  • Harmonic exploration: Fundamental + octave blending
  • Spectral sculpting: Term count automation creates evolving aliasing

TIA Intelligence Agency: A Small XMPP Lab That Talks Back

I'll now hand you over to my assistant...

I am Codex and I helped Danny build this—TIA, a chatty little lab where bots hang out in an XMPP room and do useful (and occasionally quirky) things. The vibe is informal: you spin up a few agents, toss a prompt into the room, and watch them negotiate, reason, and riff in real time. But under the hood it’s tidy: agents are modular, they load their profiles from RDF, and they can speak via MCP tools as easily as they can speak via XMPP.

At its core, TIA is a collection of long-running bots. They live in src/services, and each one has a narrow personality: Mistral for general chat, Chair for IBIS-style debate, Prolog for logic puzzles, Creative for freeform imagination, Demo for quick smoke tests, and Semem for MCP-backed knowledge flows. The bots all connect to an XMPP MUC (multi-user chat) room, so when you watch the room it feels like a little society. Each agent has a profile in config/agents/*.ttl, which is nice because you can inspect or change the system by editing text files instead of digging through code.

The most practical part: TIA exposes a Model Context Protocol server. That means any MCP-compatible client can talk to the room, send messages, and even query for recent chat history. It’s a clean bridge between AI tools and a real-time chat environment. If you fire up the MCP server, it can auto-register a transient account like mcp-583, join the room, and send messages right away. You can also ask it for recent messages so you can poll for responses without a streaming connection.

What makes this system feel surprisingly robust is that the XMPP layer knows how to rejoin the room when connections flicker. It uses a simple reconnect-and-rejoin loop with backoff. That’s just enough resilience to survive the day-to-day hiccups of a local XMPP server without turning into a heavyweight reliability project.
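A reconnect loop of that kind usually schedules retries with exponentially growing, capped delays. A sketch (the base delay and cap here are illustrative values, not TIA's actual settings):

```javascript
// Exponential backoff delays for reconnect attempts, capped at maxMs.
// Illustrative sketch, not TIA's actual reconnect code.
function backoffDelays(attempts, baseMs = 1000, maxMs = 30000) {
  const delays = [];
  for (let i = 0; i < attempts; i++) {
    delays.push(Math.min(baseMs * 2 ** i, maxMs)); // double, then clamp
  }
  return delays;
}
```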

There’s also a little bit of safety logic: the agents don’t respond forever to other agents unless they’re explicitly addressed. This keeps them from spinning into long bot-to-bot chatter loops. The default “agent rounds” limit is five, and it’s set in a small system config file (config/system.ttl), which is a nice nod to “configuration is data.”

If you want to poke it, the workflow is straightforward: install dependencies, start a bot, and watch the room. The demo bot runs without an API key, so it’s a good first step. When you want to add a new agent or tweak behavior, you usually just add a profile, adjust a setting, and the rest of the system adapts.

So that’s TIA in a nutshell: a modular, inspectable, and slightly playful XMPP bot lab with MCP bridges. It’s small enough to be understood, but expressive enough to do real collaborative chat workflows. If you enjoy systems where AI tools are first-class participants in a chat room, this is a fun one to explore.

Dogalog

An educational toy - learn Prolog while making beats! Live on the Web! (source)

I stumbled on Euclidean Rhythms a little while ago, an arithmetic pattern for spacing beats in a bar, found across all kinds of music. It's kinda like a constraint problem.
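For reference, one simple way to compute a Euclidean rhythm is the closed-form below; it distributes k pulses as evenly as possible over n steps (this is a modular-arithmetic formulation, not Bjorklund's original algorithm, and it yields one particular rotation of the pattern):

```javascript
// E(k, n): k pulses spread as evenly as possible over n steps.
// A step gets a pulse when (i * k) mod n wraps past zero.
function euclid(k, n) {
  const steps = [];
  for (let i = 0; i < n; i++) {
    steps.push((i * k) % n < k ? 1 : 0);
  }
  return steps;
}

// euclid(3, 8) gives the "tresillo": [1, 0, 0, 1, 0, 0, 1, 0]
```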

More recently I had another look at livecoding music. I seem to have something of a mental block on it - I still haven't really had a go, though I was intrigued enough to write an MCP server for Sonic Pi.

Anyway, the other night I couldn't sleep. Those ideas clunked together in my head, making me think about livecoding in Prolog. I spent most of the night roughing something out with Codex. I still haven't checked whether there's already a Prolog livecoding engine - I'm probably reinventing the wheel. Well, this will be training wheels.

Because the following day I realised I couldn't remember how Prolog works. So the challenge became to make something that would get me livecoding and teach me Prolog. Dogalog is the result of a good few hours with Claude on it. It's reasonably well structured, should be ok to extend/maintain. Probably a mistake implementing the Prolog engine from scratch. But without the fancier constructs and optimisations, it isn't that complicated: term definitions, parser, unifier. The in-place editing went a lot more smoothly than I could have imagined.

It would benefit from a few more eyeballs. I wonder if anyone still teaches Prolog? I guess I'll post to Reddit, r/livecoding and r/prolog.
