Dweve Mesh Documentation

Complete documentation covering distributed infrastructure, hybrid privacy architecture, and federated learning will be available in all supported languages upon our public launch. This preview demonstrates core architectural concepts and European AI sovereignty strategy.

Dweve Mesh

Distributed AI infrastructure transforming Europe's idle computing devices into sovereign AI capacity. Run discrete AI workloads across distributed nodes with hybrid privacy, efficient low-bit computation, and transparent token-based pricing.

3-tier architecture (Coordination, Compute, Edge)
1-8 bit discrete AI (CPU, GPU, FPGA, NPU)
96% less energy consumption vs traditional AI

European AI sovereignty through distributed computing

Europe faces a strategic challenge in artificial intelligence infrastructure: most datacenters on European soil are owned and operated by American and Chinese companies. This creates dependencies on foreign infrastructure for critical AI workloads, compromising data sovereignty and strategic autonomy. Traditional responses like building competing datacenter networks require massive capital investment that struggles to match entrenched hyperscale providers.

Dweve Mesh pursues a fundamentally different strategy. Europe possesses an underutilized strategic asset: more home and office computing devices than any other continent. These machines (desktops, workstations, servers) sit idle 10 to 16 hours daily, consuming electricity while performing no useful work. The key insight: our discrete AI technology (Core, Loom, Nexus) runs efficiently on standard CPUs through low-bit computation (1 to 8 bits), with additional GPU, FPGA, and NPU support for acceleration. Distributed workloads execute effectively across home hardware, transforming Europe's wasted electricity into productive AI infrastructure.
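
To make the low-bit claim concrete, here is a purely illustrative Rust sketch (not Dweve code) of the kind of 1-bit arithmetic that makes discrete AI cheap on ordinary CPUs: once weights and activations are packed as sign bits, a 64-element multiply-accumulate reduces to a single XNOR and a popcount instruction.

```rust
/// Illustrative only: a 1-bit dot product over a 64-weight block.
/// Weights and activations are packed as sign bits (+1 -> 1, -1 -> 0),
/// so 64 multiply-accumulates collapse into one XNOR plus one popcount.
fn binary_dot(weights: u64, activations: u64) -> i32 {
    let agree = !(weights ^ activations); // XNOR: 1 where signs match
    let matches = agree.count_ones() as i32;
    2 * matches - 64 // matches minus mismatches
}

fn main() {
    let w: u64 = 0b1010_1111;
    let a: u64 = 0b1010_0000;
    println!("dot product = {}", binary_dot(w, a));
}
```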

The mesh operates in two deployment modes. Public mesh (Dweve managed) enables organizations and individuals to contribute idle compute capacity, earning API tokens for processing requests when their hardware sits unused. Users consume these tokens for AI services in a simple credit system where 1 token equals 1 complete word (not subtokens). There is no trading, no blockchain, no cryptocurrency, just transparent usage-based pricing.

Private mesh deployments create completely isolated networks under customer control: factory floor sensors processing data without internet connectivity, research institutions ensuring sensitive data never leaves their premises, autonomous drone fleets forming self-contained coordination networks.
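
A hypothetical Rust sketch of the public mesh's credit model (names and fields are illustrative, not Dweve's actual API): contributors earn tokens for words processed while their hardware sits idle and spend them 1:1 when consuming services.

```rust
/// Hypothetical ledger for the 1 token = 1 complete word credit system.
struct TokenLedger {
    balance: u64,
}

impl TokenLedger {
    /// Credit tokens for inference work done while the node was idle.
    fn earn(&mut self, words_processed: u64) {
        self.balance += words_processed;
    }

    /// Debit tokens for a request; whole words, not subtokens.
    fn spend(&mut self, words_requested: u64) -> Result<(), String> {
        if self.balance < words_requested {
            return Err(format!(
                "insufficient tokens: have {}, need {}",
                self.balance, words_requested
            ));
        }
        self.balance -= words_requested;
        Ok(())
    }
}

fn main() {
    let mut ledger = TokenLedger { balance: 0 };
    ledger.earn(500); // processed 500 words for the mesh overnight
    ledger.spend(120).unwrap(); // consume a 120-word completion
    println!("remaining tokens: {}", ledger.balance);
}
```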

Hybrid privacy architecture balances security with efficiency. Private data (personal information, proprietary business data) is processed exclusively on local nodes and never enters the mesh. Non-private computational tasks like exploring unknown constraints in binary solution spaces, confirming common constraints, executing web searches, and general-purpose inference distribute across mesh nodes for optimal performance. This approach satisfies GDPR requirements while enabling collaborative AI that would be impossible with purely centralized architectures, delivering European data sovereignty through technical design rather than policy aspiration.

Sophisticated 3-tier architecture

The mesh operates through three specialized tiers optimized for distributed discrete AI workloads: Coordination for global network orchestration and consensus, Compute for intensive reasoning and federated learning, and Edge for local inference on contributor devices. This architecture achieves 96% less energy consumption through efficient low-bit computation (1-8 bits).

1. Coordination Tier: Network orchestration and consensus

Manages global network state and maintains consensus across the distributed infrastructure. Handles network topology optimization for efficient routing and global load balancing. Consensus mechanisms maintain 99% accuracy while performance monitoring tracks real-time mesh health, node availability, and computational efficiency across all tiers.

2. Compute Tier: Distributed reasoning and learning

Handles complex reasoning tasks with 10 to 20 times speedup through discrete computation. Distributed training pools enable collaborative model improvement via federated learning. Each node trains on local data and contributes encrypted gradient updates to the global model without centralizing sensitive information. Achieves 96% less energy consumption through efficient low-bit operations (1 to 8 bits) across CPU, GPU, FPGA, and NPU backends.
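
The following is a minimal federated-averaging sketch in Rust showing the data flow described above: gradient updates leave each node, raw data never does. Encryption of the updates is omitted here for brevity; a production system would encrypt or securely aggregate them rather than share plaintext vectors.

```rust
/// Toy local "gradient": difference between the local data mean and
/// each model weight. Only this update is shared, never the data.
fn local_gradient(local_data: &[f32], model: &[f32]) -> Vec<f32> {
    let mean = local_data.iter().sum::<f32>() / local_data.len() as f32;
    model.iter().map(|w| mean - w).collect()
}

/// Federated averaging: the global update is the mean of node updates.
fn aggregate(updates: &[Vec<f32>]) -> Vec<f32> {
    let dim = updates[0].len();
    let n = updates.len() as f32;
    (0..dim)
        .map(|i| updates.iter().map(|u| u[i]).sum::<f32>() / n)
        .collect()
}

fn main() {
    let model = vec![0.0_f32, 0.0];
    let node_a = local_gradient(&[1.0, 2.0, 3.0], &model);
    let node_b = local_gradient(&[5.0, 7.0], &model);
    let global_update = aggregate(&[node_a, node_b]);
    println!("global update: {:?}", global_update);
}
```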

3. Edge Tier: Local inference on contributor devices

Desktop workstations and home hardware contribute compute capacity during idle periods (10 to 16 hours daily). Users earn API tokens by processing inference requests when their machines sit unused, transforming wasted electricity into productive AI infrastructure. Discrete AI efficiency makes participation practical on standard CPUs without specialized hardware.

Edge servers and research facilities provide distributed inference capacity closer to data sources. Organizations deploy regional nodes for reduced latency while maintaining data locality and GDPR compliance. Universities and research labs contribute resources in exchange for discounted or free access to Dweve products, creating collaborative research networks.

Hybrid privacy: local processing meets distributed efficiency

Private data stays local. Personal information, proprietary business data, and sensitive content are processed exclusively on your local nodes through technical design rather than contractual promises.

Non-private computation distributes. General-purpose inference, constraint exploration in binary solution spaces, web searches, and collaborative learning tasks distribute across mesh nodes for optimal performance. Federated learning enables collaborative model improvement where nodes train on local data and contribute only encrypted gradient updates to global models, never centralizing the underlying information.

This hybrid approach balances security with efficiency: sensitive operations remain under your direct control, while resource-intensive non-private tasks leverage the full computational capacity of the distributed mesh. Organizations get both complete privacy for confidential data and the performance benefits of distributed processing for general workloads.
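
A hedged Rust sketch of that split (types and names are illustrative, not Dweve's implementation): workloads are classified before dispatch, and the private branch has no code path into the mesh, which is what "privacy through technical design" means in practice.

```rust
enum Sensitivity {
    Private,    // personal or proprietary data: never leaves this node
    NonPrivate, // constraint exploration, web search, general inference
}

enum ExecutionTarget {
    LocalNode,
    MeshDistributed,
}

fn route(workload: Sensitivity) -> ExecutionTarget {
    match workload {
        // Privacy by construction: no branch ships private data
        // into the mesh, rather than a contractual promise.
        Sensitivity::Private => ExecutionTarget::LocalNode,
        Sensitivity::NonPrivate => ExecutionTarget::MeshDistributed,
    }
}

fn main() {
    for workload in [Sensitivity::Private, Sensitivity::NonPrivate] {
        match route(workload) {
            ExecutionTarget::LocalNode => println!("run locally"),
            ExecutionTarget::MeshDistributed => println!("dispatch to mesh"),
        }
    }
}
```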

Network communication and deployment

Modular peer-to-peer networking via libp2p provides flexible transport support across TCP, QUIC, WebRTC, and other protocols. Nodes communicate over whatever transports work in their environment with built-in security through TLS and Noise protocols for encrypted, authenticated connections. NAT traversal techniques ensure mesh participants can connect despite restrictive firewalls and residential networks.
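
As a small concrete example, node identity in the Rust libp2p crate looks like this (builder and transport APIs vary across libp2p versions, so only the version-stable core is shown): an Ed25519 keypair whose public half becomes the node's PeerId, the identity later authenticated by Noise or TLS on each connection.

```rust
// Requires the `libp2p` crate as a dependency.
use libp2p::{identity, PeerId};

fn main() {
    // Generate the node's long-lived identity keypair.
    let keypair = identity::Keypair::generate_ed25519();
    // The PeerId is derived from the public key and identifies
    // this node to its peers across all transports.
    let peer_id = PeerId::from(keypair.public());
    println!("local peer id: {peer_id}");
}
```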

Distributed discovery through Kademlia DHT enables peer and content discovery without central directories. The self-organizing topology adapts as nodes join and leave, with redundant storage ensuring reliability despite node failures. Gossip protocols disseminate model updates and network events efficiently across the distributed infrastructure.
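
Kademlia's routing rests on an idea simple enough to show in code: the distance between two IDs is their bitwise XOR interpreted as an integer, and a lookup repeatedly queries the peers closest to the target key. A toy Rust sketch over shortened 64-bit IDs (real Kademlia keys are 256-bit hashes):

```rust
/// XOR metric: smaller result means "closer" in the keyspace.
fn xor_distance(a: u64, b: u64) -> u64 {
    a ^ b
}

fn main() {
    let target: u64 = 0b1011_0000;
    let mut peers: Vec<u64> = vec![0b1011_0001, 0b0011_0000, 0b1111_0000];
    // Each lookup round sorts known peers by distance to the target
    // and queries the closest ones for even closer peers.
    peers.sort_by_key(|p| xor_distance(*p, target));
    println!("closest peer to target: {:#010b}", peers[0]);
}
```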

Flexible deployment options for contributors: Native installation provides maximum performance for dedicated nodes running continuously. WebAssembly in browser enables consumers to contribute idle compute directly through our web application without installing software, making participation accessible to anyone with a modern browser. Both options earn API tokens for processing inference requests during idle periods.
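
For the browser path, a hedged sketch of how a Rust workload could be exposed to JavaScript through wasm-bindgen, the standard Rust-to-WebAssembly bridge (the function name and payload are hypothetical, not Dweve's actual interface):

```rust
use wasm_bindgen::prelude::*;

/// Hypothetical entry point a browser contributor node might export.
/// In a real node this would run a discrete AI kernel over the chunk
/// during the browser's idle periods; here it is a placeholder.
#[wasm_bindgen]
pub fn process_inference_chunk(input: &[u8]) -> Vec<u8> {
    input.iter().map(|b| b.wrapping_add(1)).collect()
}
```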

Getting started

The documentation covers comprehensive decentralized AI infrastructure across the following major sections:

Getting Started: Installation and quickstart
Foundational Architecture: 3-tier design and principles
Deployment Modes: Public managed vs. private isolated meshes
Hybrid Privacy: Local private processing and distributed compute
Network Protocols: libp2p, Kademlia DHT, gossip
Token Credit System: API tokens and compute contribution
Security and Compliance: GDPR, data sovereignty, encryption
Performance: 96% less energy vs traditional AI
API Reference: Rust SDK, REST, gRPC documentation
Deployment Guide: Native and WebAssembly node setup
Advanced Networking: Consensus, NAT traversal, fault tolerance
Production Operations: Monitoring, maintenance, best practices
Implementation Status: Current capabilities and roadmap

Begin with the Getting Started guide in the navigation menu, or jump directly to specific sections using the comprehensive structure above.

Integration interfaces

Rust SDK (native)

The mesh core is implemented in Rust for performance, memory safety, and concurrent execution guarantees. The native SDK provides complete control over node configuration, protocol participation, and custom extensions. Build custom node types, implement domain-specific coordination logic, or integrate mesh capabilities into existing Rust applications. Full access to libp2p primitives, cryptographic operations, and network protocols. Available under proprietary licensing.

REST API

HTTP REST API provides language-agnostic mesh access. Submit AI jobs, query network status, manage node participation, and retrieve results through standard HTTP endpoints. OpenAPI specifications document all operations. JSON payloads ensure broad language compatibility. Supports webhook notifications for asynchronous job completion, network events, and status updates.
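
A hypothetical job submission using the reqwest crate; the endpoint path, request fields, and response shape below are assumptions for illustration, and the OpenAPI specification is the authoritative contract.

```rust
// Requires `reqwest` (blocking + json features), `serde`, `serde_json`.
use serde::Deserialize;

#[derive(Deserialize)]
struct JobSubmitted {
    job_id: String,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();
    let resp: JobSubmitted = client
        .post("https://mesh.example.com/v1/jobs") // placeholder URL
        .json(&serde_json::json!({
            "task": "inference",
            "prompt": "Summarize this document",
            "max_words": 120  // billed as whole-word tokens
        }))
        .send()?
        .error_for_status()?
        .json()?;
    println!("submitted job {}", resp.job_id);
    Ok(())
}
```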

gRPC API

High-performance gRPC interface optimized for low-latency node communication and real-time coordination. Binary Protocol Buffers serialization reduces overhead. Bidirectional streaming enables efficient federated learning round coordination. Protocol Buffer definitions allow client generation for any language with gRPC support. Ideal for performance-critical integrations and high-throughput deployments.

CLI tools

Command-line tools for node operators and network administrators. Start and configure nodes, monitor mesh health, debug connectivity issues, and perform administrative operations. Interactive TUI (terminal user interface) provides real-time visualization of network status, job execution, and resource utilization. Scriptable commands enable automation and integration with existing DevOps workflows.

Last updated: September 1, 2025