MeshNetix documentation is organized into four areas: Quick Start (get up and running in 5 minutes), API Reference (a complete reference for all operations), Examples (real-world code examples and recipes), and Tutorials (step-by-step guides for common tasks).

Quick Start

Get started with MeshNetix in 5 minutes.

Step 1: Create an Account

Sign up at app.meshnetix.com. No credit card required.

Step 2: Create Your First DataFrame

quickstart.py
import unispark as us

# Create from dictionary
df = us.DataFrame({
    "name": ["Alice", "Bob", "Charlie"],
    "age": [25, 30, 35],
    "city": ["NYC", "SF", "LA"]
})

# Or from file
df = us.read_parquet("data.parquet")
df = us.read_csv("data.csv")

Step 3: Transform Your Data

transform.py
# Filter, group, aggregate - familiar API
result = df.filter("age > 25") \
           .groupby("city") \
           .agg(us.count("name"), us.mean("age")) \
           .sort("city")

# View results
result.show()

Step 4: Use SQL (Optional)

sql.py
# Run SQL queries on your data
# (assumes the DataFrame from Step 2 is available to us.sql as the "users" table)
result = us.sql("""
    SELECT city, COUNT(*) AS count, AVG(age) AS avg_age
    FROM users
    WHERE age > 25
    GROUP BY city
    ORDER BY count DESC
""")

API Reference

A complete reference covering DataFrame operations, SQL, joins and unions, and window functions.

DataFrame Operations

Select, filter, sort, group, aggregate. All the operations you use daily, running faster.
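A minimal sketch of a typical chain, reusing the quick start data; select appears in the list above, but its exact signature is an assumption:

import unispark as us

df = us.DataFrame({
    "name": ["Alice", "Bob", "Charlie"],
    "age": [25, 30, 35],
    "city": ["NYC", "SF", "LA"]
})

# Project, filter, and sort in one chain (select signature assumed)
summary = df.select("name", "age") \
            .filter("age >= 30") \
            .sort("age")
summary.show()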

SQL Support

Write SQL queries on your data. Mix DataFrame API and SQL in the same workflow.
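A sketch of mixing the two, under the same assumption as Step 4 of the quick start: the DataFrame is queryable as the users table.

# Aggregate in SQL, then refine the result with DataFrame methods
by_city = us.sql("SELECT city, AVG(age) AS avg_age FROM users GROUP BY city")

top_cities = by_city.filter("avg_age > 28").sort("avg_age")
top_cities.show()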

Joins & Unions

Inner, left, right, full outer joins. Union, intersect, except. Combine data from anywhere.
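A short sketch using the same files as the ETL example below; the how= keyword, the union method, and the quarterly file names are assumptions rather than confirmed API:

import unispark as us

orders = us.read_parquet("orders.parquet")
customers = us.read_csv("customers.csv")

# Left join on a shared key (how= keyword assumed)
enriched = orders.join(customers, "customer_id", how="left")

# Stack two frames with the same schema (union method and file names assumed)
q1 = us.read_parquet("orders_q1.parquet")
q2 = us.read_parquet("orders_q2.parquet")
all_orders = q1.union(q2)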

Window Functions

Rank, row number, lag, lead, running totals. Complex analytics made simple.
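The examples below cover running totals and rankings; as a sketch of lag using the same .over() pattern, where us.lag is an assumed helper:

# Previous row's sales within each category, ordered by date (us.lag assumed)
result = df.with_column(
    "prev_sales",
    us.lag("sales", 1).over(
        partition_by="category",
        order_by="date"
    )
)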

Code Examples

ETL Pipeline

Extract, transform, and load data from multiple sources.

import unispark as us

# Load from multiple sources
orders = us.read_parquet("orders.parquet")
customers = us.read_csv("customers.csv")

# Join and transform
result = orders.join(customers, "customer_id") \
    .filter("order_date >= '2024-01-01'") \
    .groupby("customer_segment") \
    .agg(us.sum("amount"))

# Save results
result.to_parquet("output.parquet")

Window Functions

Calculate running totals and rankings.

# Running total by category
result = df.with_column(
    "running_total",
    us.sum("sales").over(
        partition_by="category",
        order_by="date"
    )
)

# Rank within groups
result = df.with_column(
    "rank",
    us.row_number().over(
        partition_by="department",
        order_by="salary"
    )
)

Step-by-Step Tutorials

Building Your First Dashboard

Learn how to create interactive dashboards from your data in minutes.

Setting Up Data Pipelines

Create automated data pipelines that run reliably, every time.

Working with Large Datasets

Tips and techniques for handling datasets that don't fit in memory.
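One common pattern, sketched here with only the calls shown earlier plus an assumed union method and example file names: trim each file down to the columns and rows you need before combining, so no intermediate DataFrame holds the full raw data.

import unispark as us

files = ["sales_2022.parquet", "sales_2023.parquet", "sales_2024.parquet"]

# Read one year at a time, keeping only what the analysis needs
frames = [
    us.read_parquet(f)
      .select("city", "amount")
      .filter("amount > 0")
    for f in files
]

# Combine the trimmed frames (union method assumed), then aggregate once
combined = frames[0]
for frame in frames[1:]:
    combined = combined.union(frame)

result = combined.groupby("city").agg(us.sum("amount"))
result.show()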

Ready to Get Started?

Create your free account and start building with MeshNetix.