Building an OpenWebUI Pipeline for Obsidian Integration

Today I'm excited to share a project that has completely transformed how I interact with my Obsidian knowledge base: a custom OpenWebUI pipeline that lets AI assistants read and write directly to my Obsidian vault.

The Problem

Like many knowledge workers, I use my Obsidian vault as the central hub for all my notes, projects, and ideas. I've also been using AI assistants like Claude and GPT-4 through OpenWebUI to help with various tasks. However, there was a significant disconnect: the AI assistants couldn't directly access or modify my Obsidian notes.

I found myself constantly copying and pasting between Obsidian and OpenWebUI, which was inefficient and disruptive to my workflow. What I needed was a way to let my AI assistants read from and write to my Obsidian vault directly.

Enter OpenWebUI Pipelines

OpenWebUI (formerly known as Ollama WebUI) has a powerful feature called "pipelines" that allows custom data processing before and after AI interactions. Pipelines can be extended to perform a wide range of tasks, including file operations.

By creating a custom pipeline, I could bridge the gap between my AI assistants and my Obsidian vault, allowing seamless integration between the two systems.

The Solution: Captains Log Pipeline

I built a custom pipeline called "Captains Log Pipeline" that provides a simple command-based interface for interacting with my Obsidian vault directly from OpenWebUI. The pipeline implements the following features:

  • Reading files - Access any note in my Obsidian vault
  • Writing files - Create or overwrite notes with new content
  • Appending to files - Add content to existing notes
  • Creating daily notes - Generate properly formatted daily log entries
  • Listing files - Browse directories within the vault

How It Works

The pipeline is a Python FastAPI service running in a Docker container that mounts my Obsidian vault directory. It exposes an API endpoint that OpenWebUI can connect to, passing user messages for processing.

When a message contains a special command like read notes/project-ideas.md, the pipeline intercepts it, performs the requested operation on my Obsidian vault, and returns the result to the AI assistant.
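Because the container mounts the vault read-write, one thing worth getting right when turning a command like this into a filesystem operation is path traversal. A minimal sketch of the idea (the helper name and checks are mine, not lifted from the actual pipeline):

```python
from pathlib import Path


def resolve_vault_path(vault_root: str, requested: str) -> Path:
    """Resolve a user-supplied path inside the vault, rejecting traversal."""
    root = Path(vault_root).resolve()
    target = (root / requested).resolve()
    # Refuse anything that escapes the mounted vault directory
    if target != root and root not in target.parents:
        raise ValueError(f"path escapes vault: {requested}")
    return target
```

With a guard like this, a command such as read ../../etc/passwd fails before it ever touches the filesystem.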

Available Commands

read [file]           - Read a log file (defaults to today's log)
write file: content   - Write content to a file (overwrite) 
append file: content  - Append content to a file
list [directory]      - List log files (defaults to daily logs)
create                - Create a new daily log
today                 - Check if today's log exists
help                  - Display this help message
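Several of these commands (read's default, create, and today) hinge on deriving today's log filename. Assuming the YYYY-MM-DD.md naming visible in the example conversation later in this post, that lookup is nearly a one-liner (the function name is illustrative):

```python
import os
from datetime import date
from typing import Optional


def daily_log_path(daily_dir: str, day: Optional[date] = None) -> str:
    """Return the path of the daily log for the given day (default: today)."""
    day = day or date.today()
    # Daily notes are named YYYY-MM-DD.md, e.g. 2025-03-14.md
    return os.path.join(daily_dir, f"{day.isoformat()}.md")
```

The today command then reduces to an os.path.exists check on this path, and create to writing a fresh file there if one is missing.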

Technical Implementation

Let's dive into how this was built:

1. The Core Pipeline Class

The heart of the system is a Python class that implements the pipeline interface:

import os
from typing import List

from pydantic import BaseModel


class Pipeline:
    class Valves(BaseModel):
        # Configuration options for the pipeline
        CAPTAINS_LOG_PATH: str = os.getenv("CAPTAINS_LOG_PATH", "/path/to/obsidian")
        DAILY_LOG_PATH: str = os.getenv("DAILY_LOG_PATH", "/path/to/obsidian/daily")
        MAX_FILES: int = int(os.getenv("MAX_FILES", "100"))
        DEBUG: bool = os.getenv("CAPTAINS_LOG_DEBUG", "false").lower() == "true"

    def __init__(self):
        self.valves = self.Valves()

    def pipe(self, user_message: str, model_id: str, messages: List[dict], body: dict) -> str:
        """Process the user message and perform file operations."""
        # Parse the command word and its optional argument
        command_parts = user_message.strip().split(" ", 1)
        command = command_parts[0].lower()
        argument = command_parts[1] if len(command_parts) > 1 else ""

        # Dispatch to the matching file operation
        if command == "read":
            ...  # read the requested note (defaults to today's log)
        elif command == "write":
            ...  # overwrite the target file with the given content
        # ... remaining commands: append, list, create, today, help
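The write and append commands pack a target and a payload into a single argument, split on the first colon (write file: content). A sketch of that parsing step (the helper name is mine):

```python
from typing import Tuple


def parse_file_and_content(argument: str) -> Tuple[str, str]:
    """Split 'file: content' into a (file, content) pair on the first colon."""
    if ":" not in argument:
        raise ValueError("expected 'file: content'")
    file_part, content = argument.split(":", 1)
    return file_part.strip(), content.lstrip()
```

Splitting on only the first colon matters: it keeps colons inside the note content itself, like Markdown headings with subtitles, intact.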

2. Docker Container

The pipeline runs in a Docker container with access to my Obsidian vault:

version: '3'

services:
  captains-log-pipeline:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: captains-log-pipeline
    restart: unless-stopped
    ports:
      - "9098:9099"
    volumes:
      - /path/to/obsidian:/mnt/obsidian:rw
    environment:
      - CAPTAINS_LOG_PATH=/mnt/obsidian
      - DAILY_LOG_PATH=/mnt/obsidian/daily
      - MAX_FILES=100
      - CAPTAINS_LOG_DEBUG=false
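The compose file builds from a local Dockerfile. The project's actual Dockerfile isn't shown here, but a minimal one for a FastAPI service like this might look like the following (the base image, file names, and uvicorn entry point are assumptions, not the project's real setup):

```dockerfile
# Assumed minimal image for the pipeline service
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Container port 9099 matches the compose mapping above
EXPOSE 9099
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "9099"]
```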

3. FastAPI Server

A FastAPI server exposes the pipeline API:

from fastapi import FastAPI, HTTPException, Request

app = FastAPI()
pipelines: dict = {}  # loaded Pipeline instances, keyed by pipeline ID


@app.post("/pipe/{pipeline_id}")
async def pipe(pipeline_id: str, request: Request):
    """Process a message through a pipeline."""
    if pipeline_id not in pipelines:
        raise HTTPException(status_code=404, detail=f"Pipeline {pipeline_id} not found.")

    pipeline = pipelines[pipeline_id]

    # Extract the parameters from the request body
    data = await request.json()
    user_message = data.get("user_message", "")
    model_id = data.get("model_id", "")
    messages = data.get("messages", [])
    body = data.get("body", {})

    # Process the message through the pipeline
    result = pipeline.pipe(user_message, model_id, messages, body)

    return {"result": result}
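For reference, the request body this handler expects carries exactly the four fields it extracts. A minimal payload, as OpenWebUI (or a test script) might send it (the model ID here is illustrative):

```python
import json

# The JSON body POSTed to /pipe/{pipeline_id}, mirroring the fields
# the handler pulls out with data.get(...)
payload = {
    "user_message": "read notes/project-ideas.md",
    "model_id": "claude-3",
    "messages": [],
    "body": {},
}
print(json.dumps(payload))
```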

4. OpenWebUI Integration

Finally, I configured OpenWebUI to use this pipeline by adding the pipeline URL to its configuration:

environment:
  - PIPELINES_URL=http://captains-log-pipeline:9099

The Result: A Seamless AI-Obsidian Workflow

Now when I'm working with AI assistants in OpenWebUI, I can seamlessly access and modify my Obsidian vault. Here's an example conversation:

Me: What's in my daily log for today?

AI: Let me check...
> read today
Today's log exists at: /mnt/obsidian/daily/2025-03-14.md

Your daily log contains notes about the DS Asset Integration Project, including integration testing plans, equipment export simulation, and test documentation structure. Would you like me to summarize any specific part of it?

Me: Add a new section about the blog post I'm writing

AI: I'll add that section for you.
> append today: ## Blog Post: OpenWebUI Pipeline
> 
> Working on a blog post about the new OpenWebUI pipeline for Obsidian integration. Key points to cover:
> - Problem statement: disconnect between AI assistants and knowledge base
> - Solution: custom pipeline for direct integration
> - Technical implementation details
> - Example use cases and workflow improvements

Added the new section to your daily log!

This workflow has dramatically improved my productivity and the utility of AI assistants in my daily work.

Future Enhancements

While the current implementation covers my basic needs, there are several enhancements I'm planning:

  1. Search functionality across files using Obsidian's search syntax
  2. Tag management to add/remove tags from notes
  3. Linking between notes to create and manage connections
  4. Template support for creating various note types
  5. Advanced security with API key authentication
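Of these, search is the easiest to prototype. Before adopting Obsidian's full search syntax, a naive first cut could simply walk the vault for a case-insensitive substring match (the function name is mine, not part of the current pipeline):

```python
import os
from typing import List


def search_vault(vault_root: str, query: str) -> List[str]:
    """Return vault-relative paths of Markdown notes containing the query."""
    matches = []
    for dirpath, _dirnames, filenames in os.walk(vault_root):
        for name in filenames:
            if not name.endswith(".md"):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", encoding="utf-8") as f:
                    if query.lower() in f.read().lower():
                        matches.append(os.path.relpath(path, vault_root))
            except OSError:
                continue  # skip unreadable files
    return sorted(matches)
```

This reads every note on every query, so it won't scale to huge vaults, but it would be enough to wire a search command into the existing dispatch.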

Conclusion

Building this OpenWebUI pipeline has been a game-changer for my productivity. By creating a bridge between my AI assistants and my knowledge base, I've removed a significant friction point in my workflow.

If you're using both Obsidian and AI assistants, I highly recommend exploring this integration. The full source code for this project is available on GitHub, so feel free to adapt it to your own workflow.

Let me know in the comments if you have any questions or suggestions for improving this integration!


This blog post was written with the assistance of Claude, which was able to read and reference my Obsidian notes directly through the very pipeline described in this post!