LLAutoLibrary

LLAutoLibrary is an automated personal knowledge-graph system designed for researchers. It transforms raw, unstructured documents into a structured, interconnected “Digital Garden” or wiki. By leveraging local LLMs and vector databases, LLAutoLibrary automatically extracts core concepts from your documents and links them together, allowing for efficient discovery and high-fidelity information retrieval.
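As a rough illustration of the extract-and-link idea (not the actual LLAutoLibrary pipeline — the real system uses a local LLM and a vector database), here is a toy sketch in which a keyword stub stands in for the LLM extraction call; all names are illustrative:

```python
from collections import defaultdict

def extract_concepts(text: str) -> set[str]:
    """Stub for the LLM extraction step: pick out capitalized terms."""
    return {word.strip(".,") for word in text.split() if word[:1].isupper()}

def build_graph(documents: dict[str, str]) -> dict[str, set[str]]:
    """Link documents that share at least one extracted concept."""
    concept_index: defaultdict[str, set[str]] = defaultdict(set)
    for name, text in documents.items():
        for concept in extract_concepts(text):
            concept_index[concept].add(name)
    links: defaultdict[str, set[str]] = defaultdict(set)
    for docs in concept_index.values():
        for doc in docs:
            links[doc] |= docs - {doc}
    return dict(links)

docs = {
    "note1.md": "Transformers rely on Attention.",
    "note2.md": "Attention mechanisms scale quadratically.",
}
# The two notes share the concept "Attention", so they become linked.
print(build_graph(docs))
```

In the real system the stub would be replaced by a local LLM call, and the resulting links rendered as wiki pages in the Digital Garden.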

Inspired by Andrej Karpathy’s vision of personal AI operating systems and the potential of LLMs as a reasoning layer over structured data, this project aims to bridge the gap between static document storage and dynamic knowledge management.

The Vision

The goal of LLAutoLibrary is to move beyond simple folder structures. In a professional or organizational setting, this system allows an LLM to sit atop a massive knowledge graph, enabling it to generate insights, summaries, and reports based on the explicit relationships between data points rather than just raw text chunks.
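To make "explicit relationships rather than raw text chunks" concrete, here is a hypothetical sketch of querying typed edges in a knowledge graph; the edge schema and names are invented for illustration and are not LLAutoLibrary's actual data model:

```python
# Illustrative typed edges: (source, relation, target).
edges = [
    ("paper_A", "cites", "paper_B"),
    ("paper_B", "cites", "paper_C"),
    ("paper_A", "authored_by", "Dr. Smith"),
]

def related(node: str, relation: str, graph=edges) -> list[str]:
    """Follow one typed relation one hop outward from a node."""
    return [dst for src, rel, dst in graph if src == node and rel == relation]

def transitive(node: str, relation: str, graph=edges) -> set[str]:
    """Follow a relation transitively, e.g. the full citation chain."""
    seen, stack = set(), [node]
    while stack:
        for nxt in related(stack.pop(), relation, graph):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen
```

An LLM sitting atop such a graph can answer "what does paper_A ultimately depend on?" by traversal (`transitive("paper_A", "cites")`) instead of hoping the answer appears verbatim in a retrieved text chunk.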

Planned Additions

Technology Stack

This project is built with a focus on local privacy, high performance, and modern web standards.

Features

Installation

If you only want to test the LLM functionality, use testing.ipynb.

Follow these steps to set up LLAutoLibrary on your local machine.

Prerequisites

You will need Python 3, Node.js (with npm), and Ollama installed. Then pull the model used by the engine:

ollama pull gemma3n:e4b
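Before pulling, you can check that the Ollama CLI is actually on your PATH. This helper is a hypothetical convenience, not part of the project:

```python
import shutil

def ollama_ready() -> bool:
    """Return True if the `ollama` executable is found on PATH."""
    return shutil.which("ollama") is not None
```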

Backend Setup

Create and activate a virtual environment for your platform.

Windows

python -m venv venv
.\venv\Scripts\activate

Mac/Linux

python3 -m venv venv
source venv/bin/activate

Then, with the virtual environment active, install the Python dependencies:

pip install -r requirements.txt
Frontend Setup

Navigate to the frontend directory and install the dependencies:

cd frontend
npm install
Running the Project

Start the Engine: run the Python script to begin processing documents in your /raw folder.

python main.py
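As an assumption-laden sketch of the kind of pass the engine makes over the /raw folder (the actual main.py may work quite differently), picking up files that have not yet been processed might look like:

```python
from pathlib import Path

def pending_documents(raw_dir: str, processed: set[str]) -> list[Path]:
    """List files under raw_dir whose names are not yet processed."""
    return sorted(
        p for p in Path(raw_dir).rglob("*")
        if p.is_file() and p.name not in processed
    )
```

Each pending file would then be fed through concept extraction and added to the graph; `processed` stands in for whatever bookkeeping the real engine keeps.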

Start the UI: In a new terminal, run the development server:

cd frontend
npm run dev

Open your browser to the local URL provided by Vite (usually http://localhost:5173).

Acknowledgements & Inspiration