Native code intelligence as a single binary. Deep code understanding locally via CLI and MCP — semantic search, call graphs, impact analysis, architecture mapping. Zero LLM costs, zero latency.
Hi, I'm
Lisa Welsch
Founder. Developer. AI Nerd.
Passionate about building things that matter — from AI-powered dev tools and bioinformatics platforms to games and mobile apps. Based in Vienna, Austria.
~ whoami
Lisa Welsch — Founder & CEO @ TasteHub
~ cat interests.txt
LLMs, context compression, efficient code,
gaming, 3D printing, VR
~ uptime
living in the shell since 2005
~ cat /etc/motto
if it compiles, ship it
~ ls ~/projects | wc -l
12 — and counting
~ echo $LOCATION
Vienna, Austria (UTC+1)
~ cat coffee.log | tail -1
Refill #4... it's only 10am
// projects
What I've Built
Code Intelligence
Structural intelligence engine for codebases. Maps dependency graphs, detects architectural bottlenecks via Bridgeness Centrality, enforces layer boundaries, scores codebase health (0–100), and simulates the ripple effects of changes before you make them. Rust core with MCP server and BM25 search.
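To give a flavor of the kind of analysis involved: one standard way to surface architectural bottlenecks in a dependency graph is betweenness centrality (Brandes' algorithm), which scores how many shortest paths run through each module. This is a minimal, hypothetical TypeScript sketch of that idea; the engine's actual Bridgeness Centrality metric and its Rust implementation may differ.

```typescript
// Illustrative sketch: rank modules in a directed dependency graph by
// betweenness centrality (Brandes' algorithm, unweighted). A module that
// sits on many cross-cluster paths is a likely architectural bottleneck.
// This is NOT the engine's actual metric or API.

type Graph = Map<string, string[]>;

function betweenness(graph: Graph): Map<string, number> {
  const score = new Map<string, number>();
  for (const v of graph.keys()) score.set(v, 0);

  for (const s of graph.keys()) {
    // Single-source shortest paths via BFS.
    const stack: string[] = [];
    const pred = new Map<string, string[]>();
    const sigma = new Map<string, number>(); // number of shortest paths
    const dist = new Map<string, number>();
    for (const v of graph.keys()) {
      pred.set(v, []); sigma.set(v, 0); dist.set(v, -1);
    }
    sigma.set(s, 1); dist.set(s, 0);
    const queue: string[] = [s];
    while (queue.length) {
      const v = queue.shift()!;
      stack.push(v);
      for (const w of graph.get(v) ?? []) {
        if (dist.get(w)! < 0) { dist.set(w, dist.get(v)! + 1); queue.push(w); }
        if (dist.get(w) === dist.get(v)! + 1) {
          sigma.set(w, sigma.get(w)! + sigma.get(v)!);
          pred.get(w)!.push(v);
        }
      }
    }
    // Accumulate path dependencies back-to-front.
    const delta = new Map<string, number>();
    for (const v of graph.keys()) delta.set(v, 0);
    while (stack.length) {
      const w = stack.pop()!;
      for (const v of pred.get(w)!) {
        delta.set(v, delta.get(v)! + (sigma.get(v)! / sigma.get(w)!) * (1 + delta.get(w)!));
      }
      if (w !== s) score.set(w, score.get(w)! + delta.get(w)!);
    }
  }
  return score;
}

// Toy graph: "ui" and "api" reach "db"/"auth" only through "core",
// so "core" should score highest.
const deps: Graph = new Map([
  ["ui", ["core"]], ["api", ["core"]],
  ["core", ["db", "auth"]], ["db", []], ["auth", []],
]);
const ranked = [...betweenness(deps)].sort((a, b) => b[1] - a[1]);
console.log(ranked[0][0]); // → "core"
```

A module whose score dominates the rest of the graph is exactly the kind of single point of coupling a change simulation would flag before a refactor.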
Language-agnostic open protocol for streaming, incremental code intelligence. Bridges LSP's runtime-query model with SCIP's static-snapshot model — adding a lazy query graph, blast-radius indexing, and a content-addressed dependency registry. Rust reference implementation with Dart bindings.
Automated architecture audits in the CI/CD pipeline. Detects pattern violations, circular dependencies, and structural drift before they become technical debt.
Interactive onboarding system that radically shortens time-to-productivity for new developers by teaching project-specific architecture and conventions.
Knowledge & Data
High-efficiency RAG pipeline. Transforms 25+ formats (PDF, audio, video) into token-optimized knowledge stores. Massively reduces context load while maximizing retrieval precision.
Institutional memory for AI-assisted teams. Preserves tacit project knowledge and human insights across context resets and personnel changes. CLI + MCP server.
High-efficiency context compression for LLMs. Reduces token usage and inference cost while increasing relevance and semantic fidelity.
Zero-dependency TypeScript library for splitting structured formats (JSON, XML, YAML, Markdown) into parts that must survive verbatim and parts that can be compressed. The seam-splitting logic extracted from ContextCompressionEngine as a reusable package. 87 tests. Works in Node, Deno, Bun, and edge runtimes.
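The seam-splitting idea above can be sketched in a few lines: walk a structured value and separate segments that must survive verbatim (keys, paths, scalars) from free-text values a downstream compressor may shorten. This is a hypothetical illustration under my own assumptions — the function and type names here are invented and are not the library's actual API.

```typescript
// Hypothetical sketch of seam-splitting for JSON: structure and short
// scalars are marked "verbatim"; long prose string values are marked
// "compressible" for a downstream compressor. Names are illustrative only.

type Seam = { kind: "verbatim" | "compressible"; text: string };

function splitJson(value: unknown, path = "$"): Seam[] {
  if (typeof value === "string" && value.length > 20) {
    // Long free-text values are safe to compress; their paths are not.
    return [{ kind: "compressible", text: `${path}=${value}` }];
  }
  if (Array.isArray(value)) {
    return value.flatMap((v, i) => splitJson(v, `${path}[${i}]`));
  }
  if (value !== null && typeof value === "object") {
    return Object.entries(value as Record<string, unknown>)
      .flatMap(([k, v]) => splitJson(v, `${path}.${k}`));
  }
  // Scalars (numbers, booleans, short strings, null) stay verbatim.
  return [{ kind: "verbatim", text: `${path}=${JSON.stringify(value)}` }];
}

const doc = { id: 42, note: "a long free-text note that could be summarized" };
const seams = splitJson(doc);
console.log(seams.map((s) => s.kind).join(",")); // → "verbatim,compressible"
```

Keeping the split policy (here, a naive length threshold) separate from the traversal is what lets the same walker serve JSON, YAML, or Markdown front ends.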
Orchestration framework for complex multi-agent systems and swarm intelligence workflows within knowledge processing pipelines.
Adaptive serving layer that detects hardware capabilities (CPU, RAM, GPU) in real time, enabling servers to deliver tailored content (lite vs. full) per device.
Creative Tools
Science
Games
// contact
Get in Touch
I'm always open to new opportunities, collaborations, or just a good conversation about tech. Feel free to reach out.
Say Hello
Buy me a coffee