The Socket Runtime is pre-release; please track and discuss issues on GitHub.

Socket Runtime

Write once, run anywhere, connect everyone

Socket Runtime is like a next-generation Electron. Build native apps for any OS using HTML, CSS and JavaScript, for desktop & mobile! You can also connect your users, and let them communicate directly, without the cloud or any servers!

Features

Local First

A full-featured File System API & Bluetooth make it possible to create excellent offline and local-first user experiences.

P2P & Cloud

Built to support a new generation of apps that can connect directly to each other by providing a high performance UDP API.

Use any backend

Business logic can be written in any language: Python, Rust, Node.js, etc. The backend is even completely optional.

Use any frontend

All the standard browser APIs are supported, so you can use your favorite front-end framework to create your UIs: React, Svelte, or Vue, for example.

Maintainable

Socket itself is built to be maintainable: zero dependencies and a smaller code base than any other competing project.

Lean & Fast

Socket has a smaller memory footprint and creates smaller binaries than any other competing project.

Getting Started

Install

  • npm: npm install @socketsupply/socket -g
  • macOS: curl -s -o- https://sockets.sh/sh | bash -s
  • Linux: curl -s -o- https://sockets.sh/sh | bash -s
  • Windows: iwr -useb https://sockets.sh/ps | iex

Create Socket App

This is similar to React's Create React App. The idea is that it provides a few basic boilerplates and some strong opinions so you can get coding on a production-quality app as quickly as possible.

$ npx create-socket-app -h

usage: create-socket-app [react | svelte | tonic | vanilla | vue]
$ npx create-socket-app

Creating socket files...OK
Initializing npm package...OK
Installing dependencies...OK
Adding package scripts...OK
Updating project configuration...OK
Copying project boilerplate...OK

Type 'npm start' to launch the app

$ tree
.
├── README.md
├── build.js
├── package.json
├── socket.ini
├── src
│   ├── icon.png
│   ├── index.css
│   ├── index.html
│   └── index.js
└── test
    ├── index.js
    └── test-context.js

Anatomy of a Socket App

[Diagram: a Socket app runs on Android, iOS, Windows, Linux, or macOS. The Socket Runtime hosts the app's JS, HTML, and CSS in a mobile or desktop UI ("Hello, World"), with an optional sub-process alongside.]

  • Mobile or Desktop UI: This is what you see on your screen when you open an app, either on your phone or your desktop.
  • Socket Runtime: The Socket CLI tool builds, packages, and manages your application's assets. The runtime abstracts the details of the operating system so you can focus on building your app.
  • HTML: This is plain old HTML that is loaded by the Socket Runtime.
  • CSS: This is plain old CSS that is loaded by the HTML file.
  • JS: This is plain old JavaScript that is loaded by the HTML file. It may be bundled. It runs in a browser-like environment with all the standard browser APIs.
  • Sub Process: Some apps do computationally intensive work and may want to move that logic into a sub-process. That sub-process is piped to the render process, so it can be written in any language (see the Sub Process example below).

JavaScript:

import fs from 'socket:fs/promises'

window.addEventListener('DOMContentLoaded', async () => {
  console.log(await fs.readFile('index.html'))
})

HTML:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <link rel="stylesheet" href="./index.css" />
  </head>
  <body>
    <h1>Hello, World</h1>
    <script type="module" src="./index.js"></script>
  </body>
</html>

CSS:

h1 {
  text-transform: uppercase;
}

Sub Process:

// This can be any program that reads from stdin and writes to stdout.
// Unlike Electron or other frameworks, the sub-process is completely optional.
// This is an example of using a JavaScript runtime as a sub-process.

import { Message } from '@socketsupply/socket-api/ipc.js'
import pipe from '@socketsupply/node-pipe'

pipe.on('data', data => {
  pipe.write(data)
})

pipe.write(Message.from('setTitle', { value: 'hello' }))

Next Steps

The same codebase will run on mobile and desktop, but there are some features unique to each. Ready to dive a bit deeper?

See the Mobile Guide and the Desktop Guide below.

Apple Guide

The following is a guide for building apps on Apple's macOS and iOS operating systems.

Prerequisites

  1. Sign up for a (free) Apple Developer account.
  2. Register your devices for testing. You can use the ssc list-devices command to get your Device ID (UDID). The device should be connected to your Mac by wire.
  3. Create a wildcard App ID for the application you are developing.
  4. Write down your Team ID. It's in the top right corner of the website. You'll need this later.

macOS

  • Xcode Command Line Tools. If you don't already have them (and you don't have Xcode installed), you can run the command xcode-select --install.

iOS

Code Signing Certificates

  • Open the Keychain Access application on your Mac (it's in Applications/Utilities).
  • In the Keychain Access application choose Keychain Access -> Certificate Assistant -> Request a Certificate From a Certificate Authority...
  • Type your email in the User Email Address field. The other form elements are optional.
  • Choose Request is Saved to Disk and save your certificate request.

macOS

Signing software on macOS is optional, but it's best practice. Not signing software is like using http instead of https.

  • Create a new Developer ID Application certificate on the Apple Developer website.
  • Choose the certificate request you created two steps earlier.
  • Download your certificate and double-click it to add it to your Keychain.

iOS

To run software on iOS, it must be signed by Xcode using the certificate data contained in a "Provisioning Profile". This is a file generated by Apple and it links app identity, certificates (used for code signing), app permissions, and physical devices.

  • Create a new iOS Distribution (App Store and Ad Hoc) certificate on the Apple Developer website.
  • Choose the certificate request you created two steps earlier.
  • Download your certificate and double-click it to add it to your Keychain.

When you run ssc build --target=ios . on your project for the first time, you may see the following because you don't have a provisioning profile:

ssc build --target=ios .
• provisioning profile not found: /Users/chicoxyzzy/dev/socketsupply/birp/./distribution.mobileprovision. Please specify a valid provisioning profile in the ios_provisioning_profile field in your `ssc.config`
  • Create a new Ad Hoc profile. Use the App ID you created with the wildcard.
  • Pick the certificate that you added to your Keychain two steps earlier.
  • Add the devices that the profile will use.
  • Add a name for your new distribution profile (we recommend naming it "distribution").
  • Download the profile and double-click it. This action will open Xcode. You can close it after it's completely loaded.
  • Place your profile in your project directory (the same directory as ssc.config). Profiles are secret; add your profile to .gitignore.

Configuration

macOS

You will want to ensure the following fields are filled out in your ssc.config file. They will look something like this...

mac_team_id: Z3M838H537
mac_sign: Developer ID Application: Operator Tools Inc. (Z3M838H537)

iOS

  1. Set the ios_distribution_method value in ssc.config to ad-hoc.
  2. Set the ios_codesign_identity value in ssc.config to the certificate name as it's displayed in the Keychain, or copy it from the output of security find-identity -v -p codesigning.
  3. Set the ios_provisioning_profile value in ssc.config to the filename of your provisioning profile (i.e., "distribution.mobileprovision").
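
Together, these fields will look something like this (the codesign identity below is a placeholder; use the name shown in your own Keychain and the profile filename you chose above):

ios_distribution_method: ad-hoc
ios_codesign_identity: iPhone Distribution: Operator Tools Inc. (Z3M838H537)
ios_provisioning_profile: distribution.mobileprovision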

Development

Create a simulator VM and launch the app in it

ssc build --target=iossimulator -r .

Distribution And Deployment

ssc build --target=ios -c -p -xd .

To your device

Install Apple Configurator, open it and install Automation Tools from the menu.

Connect your device and run ssc install-app <path> where path is the root directory of your application (the one where ssc.config is located).

An alternative way to install your app is to open the Apple Configurator app and drag the inner /dist/build/[your app name].ipa/[your app name].ipa file onto your phone.

To the Apple App Store

xcrun altool --validate-app \
  -f file \
  -t platform \
  -u username [-p password] \
  [--output-format xml]
xcrun altool --upload-app \
  -f file \
  -t platform \
  -u username [-p password] \
  [--output-format xml]

Debugging

Check the Troubleshooting guide below first. You can also run lldb and attach to a process, for example...

process attach --name TestExample-dev

Logging

To see logs on either platform, open Console.app (installed on macOS by default) and in the right side panel pick the device or computer name.

Working with the file system on iOS

iOS Application Sandboxing has a set of rules that limits access to the file system. Apps can only access files in their own sandboxed home directory.

  • Documents: The app's sandboxed documents directory. The contents of this directory are backed up by iTunes and may be made accessible to the user via iTunes when UIFileSharingEnabled is set to true in the application's Info.plist.
  • Library: The app's sandboxed library directory. The contents of this directory are synchronized via iTunes (except the Library/Caches subdirectory, see below), but never exposed to the user.
  • Library/Caches: The app's sandboxed caches directory. The contents of this directory are not synchronized via iTunes and may be deleted by the system at any time. It's a good place to store data that provides a good offline-first experience for the user.
  • Library/Preferences: The app's sandboxed preferences directory. The contents of this directory are synchronized via iTunes. Its purpose is to be used by the Settings app; avoid creating your own files in this directory.
  • tmp: The app's sandboxed temporary directory. The contents of this directory are not synchronized via iTunes and may be deleted by the system at any time. It's still recommended that you manually delete data that is no longer necessary, to minimize the space your app takes up on the file system. Use this directory to store data that is only useful during the app's runtime.

Desktop Guide

Getting Started

Open a terminal and navigate to where you keep your code. Create a directory and initialize it.

ssc init

Mobile Guide

Develop & Debug Cycle

You'll want to write code, see it, change it, and repeat this cycle. So the typical approach is to create a watch script that rebuilds your files when there are changes. If you provide a port, the ssc command will try to load http://localhost on that port.

ssc build -r --port=8000

You'll need to tell your build script the output location. The ssc command can tell you the platform-specific build destination. For example:

./myscript `ssc list-build-target .`
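
For example, a minimal build.js sketch that copies src into whatever output directory it is given might look like this (the file layout and ESM setup are assumptions based on the create-socket-app scaffold; adapt it to your bundler):

// build.js: a minimal sketch. The output directory comes from `ssc list-build-target .`,
// e.g. `node build.js "$(ssc list-build-target .)"`.
import fs from 'node:fs/promises'

const target = process.argv[2]

if (!target) {
  console.error('usage: node build.js <build-target-directory>')
  process.exit(1)
}

// Copy (or bundle) everything in src/ into the platform-specific build destination.
await fs.cp('src', target, { recursive: true })
console.log(`copied src -> ${target}`)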

Running the Mobile Simulator

After you get your UI looking the way you want, the next step is to try it out in the simulator. At this point, you can use either the -ios or -android flag, as well as the -simulator flag. This will create a platform-specific bundle, create and boot a simulator VM, and then run your app in the simulator if the -r flag is provided.

ssc build --target=iossimulator -r

Debugging in the Mobile Simulator

You can use Safari to attach the Web Inspector to the Simulator. In the Safari menu, navigate to Develop -> Simulator -> index.html. This will be the exact same inspector you get while developing desktop apps.

Next Steps

The JavaScript APIs are the same on iOS and Android; check out the API docs.

Peer To Peer

Why

There are broadly 3 categories of concerns addressed by P2P.

Complexity

One server to many users is a natural bottleneck, and scaling servers up quickly becomes a complex distributed system of shared state. A P2P network is many users to many users, and although it is also an eventually consistent, distributed system of shared state, the total complexity of all the components needed to create a P2P network is finite, transparent, and can be formally verified.

Security & Sovereignty

Clouds (even Gov Cloud) are closed systems, owned by third parties. There is always a non-zero chance of a greater number of incidents than if the network, data, and software were entirely self-contained, without a man-in-the-middle.

Cost

Servers, and cloud infrastructure in general, become more expensive as demand increases. Although "free tiers" can be cheap for trivial workloads, they can be prohibitive for many kinds of businesses.

Use Cases

The protocol is limited to connecting peers and delivering datagrams with a high degree of reliability. It sits directly on top of UDP and is suitable for building applications and protocols. Use cases include chat, social networks, mail, photo and file sharing, and distributed computing.

How

The dependency for this API will be published on 02.01.2023.

import { Peer, sha256 } from 'socket:peer'

const peer = new Peer({ publicKey, privateKey })

peer.onPacket = (packet, port, address) => {
  console.log(packet)
}

peer.init()

peer.publish({
  topicId: await sha256('greetings'),
  message: 'hello, world'
})

TODO link to the API docs and explain this code more.

NAT Traversal

Problem Description

With a client-server architecture, any client can directly request a response from the server. However, in a peer-to-peer architecture, a client cannot directly request a response from another client.

P2P software needs to listen for new messages from unknown people, but most routers will discard unsolicited packets. It is possible to work around this problem.

P2P connectivity requires coordination between three or more peers. Consider the following analogy.

Alice wants to write a letter to Bob. But Bob moves frequently, and she has no other way to contact him. Before Alice can send Bob the letter, she needs to learn his address. The solution is for her to ask some friends who might have talked to Bob recently.

In this analogy, Alice's letter is really a packet of data. Bob's address is his external IP address and port. And their friends are a serializable list of recently known IP addresses and ports. You can read more about the technical details in the STATE_0 section of the spec.
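
As a toy illustration only (these are not the real packet or peer-table formats; see the spec), the "friends" boil down to a serializable list that can be queried for Bob's last known address:

// A toy model of the analogy above; field names and values are illustrative.
const friends = [
  { address: '203.0.113.7', port: 40501, lastSawBobAt: null },
  { address: '198.51.100.2', port: 61402, lastSawBobAt: { address: '192.0.2.9', port: 3302 } }
]

// Alice asks each friend until one can tell her Bob's current external address and port.
// In the real protocol this is a network query to each peer, not a local lookup.
function findBob (friends) {
  for (const friend of friends) {
    if (friend.lastSawBobAt) return friend.lastSawBobAt
  }
  return null // nobody has talked to Bob recently; keep asking around
}

console.log(findBob(friends)) // { address: '192.0.2.9', port: 3302 }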

Small Networks

Problem Description

Imagine a scenario where Alice and Bob are sending each other messages. Alice goes offline. Bob continues to write messages to Alice. And because the app Bob is using has a good "local first" user experience, it appears to Bob as if everything is fine and that Alice should eventually receive all his messages, so he also goes offline. When Alice comes online, she doesn't see Bob's most recent messages because he's not online to share them.

Problem Summary & Solution Deficits

This is a very common problem with P2P; it's sometimes called the Small Network Problem. How well data can survive in a network with this problem is referred to as "partition tolerance". The problem is often solved by using STUN and TURN (relay) servers, but these add additional infrastructure that comes at a cost (in terms of time, money, and expertise).

Solution

The way the Stream Relay Protocol solves this problem is with a shared, global packet caching strategy. Every peer in the entire network allocates a small budget (16MB by default) for caching packets from any other peer in the network.

A peer will cache random packets with a preference for packets that have a topic ID that the peer subscribes to. A packet starts out with a postage value of 16 (A peer will never accept a packet with a postage value greater than 16). When a packet nears its max-age, it is re-broadcast to 3 more random peers in the network, each taxing its postage by a value of 1 when received, unless its postage value is 0, in which case it is no longer re-broadcast and is purged from the peer’s cache.

When a message is published, it is also re-broadcast by at least 3 random peers, with a preference for the intended recipient and peers that subscribe to the same topic. The result of this is a high r-value (or r0, also known as Basic Reproduction Ratio in epidemiology).
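
A rough sketch of this caching rule might look like the following (the function and field names, the random selection probability, and the purge behavior are assumptions for illustration; the real logic lives in the protocol implementation):

// A simplified model of the packet-cache rule described above.
const MAX_POSTAGE = 16
const cache = new Map() // packetId -> { packet, receivedAt }

function accept (packet, subscribedTopics) {
  if (packet.postage > MAX_POSTAGE) return false // never accept over-postage packets
  // cache random packets, preferring topics this peer subscribes to
  if (subscribedTopics.has(packet.topicId) || Math.random() < 0.5) {
    cache.set(packet.packetId, { packet, receivedAt: Date.now() })
  }
  return true
}

function expire (maxAge, rebroadcast) {
  for (const [id, { packet, receivedAt }] of cache) {
    if (Date.now() - receivedAt < maxAge) continue // not near max-age yet
    if (packet.postage > 0) {
      // re-broadcast to 3 random peers; each receiver taxes the postage by 1 on receipt
      rebroadcast(3, packet)
    }
    cache.delete(id) // aged out of this cache; it lives on in the peers it was sent to
  }
}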

Solution Summary & Solution Gains

In simulations of a network of 128 peers, where the join and drop rate is >95%, +/-30% of NATs are hard, and only +/-50% of the peers subscribe to the topic ID, an unsolicited packet can replicate to 100% of the subscribers in an average of 2.5 seconds, degrading to only 50% after 1000% churn (unrealistically hostile network conditions) over a period of more than a minute.

This strategy also improves as the number of participants in the network increases, since the size of the cache and the time packets need to live in it are reduced. If we revisit our original problem with this strategy, we can demonstrate a high degree of partition tolerance. It can be said that the peers in a network act as relays (hence the name of the protocol).

Bob continues to write (optionally encrypted) messages to Alice after she goes offline, and his packets are published to the network. A network of only 1024 peers (split across multiple apps) will give Bob's packets a 100% survival rate over a 24-hour period, without burdening any particular peer with storage or compute requirements. This allows Alice to return to the network and see Bob's most recent messages without the need for additional infrastructure.

Cache Negotiation

TODO

Connectionless

TCP is often thought of as an ideal choice for packet delivery since it's considered "reliable". When a TCP packet is lost, subsequent packets are withheld until the lost packet is retransmitted; this can mean a delay of up to 1s (as per RFC 6298 section 2.4). If the packet can't be retransmitted, an exponential backoff could add another 600ms of delay before retransmission.

In fact, Head-of-Line Blocking is generally a problem with any ordered stream, whether it's TCP, or UDP with additional higher-level protocol code that imposes ordering.

TCP introduces other unwanted complexity that makes it less ideal for P2P.

UDP is only considered "unreliable" in the sense that packets are not guaranteed to be delivered. However, UDP is ideal for P2P networks because it's message-oriented and connectionless (ideal for NAT traversal). Also, because of its message-oriented nature, it's lightweight in terms of resource allocation. It's the responsibility of a higher-level protocol to implement a strategy for ensuring UDP packets are delivered.

The Stream Relay Protocol eliminates Head-of-Line blocking entirely by reframing packets as content-addressable, doubly-linked lists, allowing packets to be delivered out of order and become eventually consistent. Causal ordering is made possible by traversing the previous or next IDs to determine whether packets came before or after one that is known.

And in the case where there is loss (or simply missing data), the receiver MAY decide to request the packet. If the sending peer becomes unavailable, the receiver can query the network for the missing packet.

The trade-off is that more data is required to re-frame the packet. The average MTU for a UDP packet is ~1500 bytes. The Stream Relay Protocol uses ~134 bytes for framing, leaving 1024 bytes for application or protocol data, which is more than enough.
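
As a rough illustration (the field names are assumptions; the real ~134-byte binary layout is defined in the spec), a re-framed packet could be modeled like this:

// Illustrative only: a content-addressable, doubly-linked packet frame.
const frame = {
  packetId: 'sha256 hash of this packet',          // the packet's content address
  previousId: 'sha256 hash of the previous packet', // link backwards for causal ordering
  nextId: 'sha256 hash of the next packet',         // link forwards, when known
  message: new Uint8Array(1024)                      // up to 1024 bytes of application or protocol data
}

// Packets may arrive out of order; a receiver can detect a gap by walking the links
// and MAY then request the missing packet from the peer or the wider network.
function hasMissingPredecessor (frame, known) {
  return frame.previousId !== null && !known.has(frame.previousId)
}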

Further Reading

See the specification and the source code for more details.

Troubleshooting

macOS

Crashes

To produce a meaningful backtrace that can help debug a crash, you'll need to re-sign the binary with the ability to attach the lldb debugger tool. You'll also want to enable core dumps in case the analysis isn't exhaustive enough.

sudo ulimit -c unlimited # enable core dumps (`ls -la /cores`)
/usr/libexec/PlistBuddy -c "Add :com.apple.security.get-task-allow bool true" tmp.entitlements
codesign -s - -f --entitlements tmp.entitlements ./path/to/your/binary
lldb ./path/to/your/binary # type `r`, then after the crash type `bt`

Clock Drift

If you're running from a VM inside macOS, you may experience clock drift and the signing tool will refuse to sign. You can set sntp to refresh more frequently with the following command...

sudo sntp -sS time.apple.com

macOS asks for password multiple times on code signing

Open Keychain Access and find your developer certificate under the My Certificates section. Expand your certificate and double-click the private key. In the dialog, click the Access Control tab.

The codesign utility is located at /usr/bin/codesign. To add it to the allowed applications list, click the "+" button to open the File Dialog, then press ⌘ + Shift + G and enter /usr/bin. Select the codesign utility from Finder.

Build or compile failures

aclocal / automake: command not found

To build ssc for iOS you need automake and libtool installed.

brew install automake
brew install libtool

unable to build chain to self-signed root for signer (...)

You need the intermediate certificate that matches your code signing certificate. To find which "Worldwide Developer Relations" certificate matches your certificate, open your signing certificate in the Keychain, open this page, and find the certificate that matches the details in the "Issuer" section of your certificate.

xcrun: error: SDK "iphoneos" cannot be located

You have to configure the Xcode command line tools. To do this, you can run the following command:

sudo xcode-select --switch /Applications/Xcode.app

fatal error: 'lib/uv/include/uv.h' file not found

Make sure your local ssc binary has been compiled with the ios parameter (./bin/install.sh ios); otherwise uv.h will not exist.

unable to find utility simctl

You need to have Xcode installed on your Mac.

You have not agreed to the Xcode license agreements, please run 'sudo xcodebuild -license' from within a Terminal window to review and agree to the Xcode license agreements.

You can run sudo xcodebuild -license to agree to the license.

Multiple Password Prompts

If macOS asks you for a password every time you run the command with the -c flag, follow the instructions in the "macOS asks for password multiple times on code signing" section above.

Application crashes on start

If you use iTerm2, your app may crash with:

This app has crashed because it attempted to access privacy-sensitive data without a usage description. The app's Info.plist must contain an NSBluetoothAlwaysUsageDescription key with a string value explaining to the user how the app uses this data.

Command line apps inherit their permissions from iTerm, so you need to grant Bluetooth permission to iTerm in macOS system preferences. Go to Security & Privacy, open the Privacy tab and select Bluetooth. Press the "+" button and add iTerm to the apps list.

Windows

Development Environment

clang++ version 14 is required for building.

You will need build tools

The WebView2LoaderStatic.lib file was sourced from this package.

Cannot Run Scripts

If the app cannot be loaded because running scripts is disabled on this system, you will see an error like this:

./bin/bootstrap.ps1 : File C:\Users\user\sources\socket\bin\bootstrap.ps1 cannot be loaded because running scripts is
disabled on this system. For more information, see about_Execution_Policies at
https:/go.microsoft.com/fwlink/?LinkID=135170.

Then you can follow the steps at https://superuser.com/a/106363:

  1. Start Windows PowerShell with the "Run as Administrator" option.
  2. set-executionpolicy remotesigned

MSVC

Setting up the MSVC build environment from Git Bash

You can leverage the MSVC build tools (clang++) and environment headers directly in Git Bash by loading them into your shell environment. This is possible by running the following command:

source bin/mscv-bash-env.sh

The bin/install.sh shell script should work for compiling the ssc tool. It is also recommended to initialize this environment when building applications with ssc from the CLI, so the correct build tools are used and the header and library paths for the compiler are set.

Linux

Build failures

If you are getting a failure saying the build tool can't locate your compiler, try making sure that the CXX environment variable is set to the location of your C++ compiler (which g++, or which c++).

The latest version of macOS should have installed a C++ compiler for you, but on Linux you may need to update some packages. To ensure you have the latest clang compiler and libraries, you can try the following...

For Debian/Ubuntu, before you install the packages, you may want to add the software update repos listed below to the software updater.

Note that clang version 14 is only available on Ubuntu 22.04. Use clang 13 for prior versions of Ubuntu.

Ubuntu

sudo apt install \
  build-essential \
  clang-14 \
  libc++1-14-dev \
  libc++abi-14-dev \
  libwebkit2gtk-4.1-dev

Arch/Manjaro

Arch uses the latest versions, so just install base-devel.

sudo pacman -S base-devel

Multiple g++ versions

If you've tried running the above apt install command and you get an error related to Unable to locate package, you can also install multiple versions of g++ on your system.

sudo apt install software-properties-common
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt install gcc-9 g++-9 gcc-10 g++-10 gcc-11 g++-11 gcc-12 g++-12

Then you can set your C++ compiler to g++-12:

# Add this to bashrc
export CXX=g++-12

Can't find Webkit

If you run into an error about not finding webkit & gtk like this:

Package webkit2gtk-4.1 was not found in the pkg-config search path.
Perhaps you should add the directory containing `webkit2gtk-4.1.pc'
to the PKG_CONFIG_PATH environment variable
No package 'webkit2gtk-4.1' found
In file included from /home/runner/.config/socket/src/main.cc:9:
/home/runner/.config/socket/src/linux.hh:4:10: fatal error: JavaScriptCore/JavaScript.h: No such file or directory
    4 | #include <JavaScriptCore/JavaScript.h>
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.

Then you will want to install those dependencies

sudo apt-get install libwebkit2gtk-4.1-dev

FAQ

Cross Platform Development

What is a Modern Runtime for Web Apps?

Modern does not refer to how recently any component of the software was written. For example, we use libuv, which certainly isn't new; in fact, it's considered boring and stable. It's the approach that's modern.

The client-server model was more relevant when computers were fewer and less powerful. Now we are surrounded by billions of computers that can connect directly to each other, so servers are becoming less relevant no matter how fast they are.

Why not Electron?

Electron's binary size and memory footprint are far from acceptable for most developers. The bulk of the weight comes from the decision to build-in V8 and a custom distribution of node.js.

Why not Tauri, Rust?

Tauri is a project for people who want to write Rust. Socket SDK is for Web Developers who want to create connected apps with HTML, CSS, and JavaScript.

Webview is C++, and so are the platforms that it runs on. The memory safety offered by Rust is great, but it becomes irrelevant when Rust is just a thin wrapper around a world of C++. It is possible to write C++ that is as safe as Rust; it's just a hell of a lot harder.

Does Webview render consistently across platforms?

Historically it did not. Now it does.

Is it secure?

Yes. As much as anything else. Just NEVER try to build a browser. NEVER evaluate arbitrary code. ALWAYS use a strong CSP. ALWAYS sanitize any data that will be rendered in a UI.
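
For example, when rendering data received from the network, prefer DOM APIs that treat it as plain text rather than markup (a minimal sketch):

// Render untrusted input as text, never as markup.
const userInput = '<img src=x onerror="alert(1)">' // e.g. a string received from another peer
const el = document.querySelector('#message')

// Unsafe: el.innerHTML = userInput
el.textContent = userInput // displayed verbatim, never parsed as HTML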

Will you support a specific feature?

Possibly. Create a PR and make an argument for why the feature is relevant to everyone who would use this project.

How can Webview based apps compete with the quality of native apps?

Native apps require an enormous amount of developer effort if the developer wants their app to run across multiple platforms. Socket lowers the barrier to entry and lets in the world's largest developer community. With care, and avoiding bloated frameworks, a web-based app can run as well as any native app.

Peer To Peer

Why should I care that P2P is free? AWS is almost free!

AWS is nearly free until you experience any kind of growth. The cost of the cloud scales up with the demand for a product, and for most companies the cloud becomes their largest cost center. When this cost is combined with non-cloud costs (the cost of experts, their managers, and key-person churn), it can make profitability impossible.

How effective are distributed networks at hosting the long tail of rarely-accessed content?

We make it possible to send and receive packets even when peers are offline. But this isn't the same as "free storage". In networks like BitTorrent, rarely-accessed content (content accessed less frequently than once every 72 hours) becomes unavailable as the few peers hosting it drop offline.

In this case, we enable developers to build hybrid networks. In hybrid networks, developers can choose to keep a centralized copy of all content. This keeps rarely-accessed content always available. For popular content, a distributed swarm of users’ devices also assist in distribution, reducing the cost of serving that content from a central location.

How can a peer replace a server?

A peer should not be asked to handle the same kinds of workloads as a server. If you develop an app that monopolizes a user's device, they will be unhappy, regardless of what architecture you are using. Peers should handle smaller workloads in shorter bursts.

With peer-to-peer networks, growth increases availability and compute capacity. No matter how many peers join your network, you should continue to design with the assumption that peers are unreliable and infrequently online.