The Socket Runtime is pre-release, please track and discuss issues on GitHub.

Socket Runtime

Write once. Run anywhere. Connect everyone.

Socket Runtime is like a next-generation Electron. Build native apps for any OS using HTML, CSS, and JavaScript, for desktop & mobile! You can also connect your users, and let them communicate directly, without the cloud or any servers!

Features

Local First

Full-featured File System and Bluetooth APIs make it possible to create excellent offline and local-first user experiences.

P2P & Cloud

Built to support a new generation of apps that can connect directly to each other by providing a high-performance UDP API.

Use any backend

Business logic can be written in any language: Python, Rust, Node.js, etc. The backend is even completely optional.

Use any frontend

All the standard browser APIs are supported, so you can use your favorite front-end framework to create your UIs: React, Svelte, or Vue, for example.

Maintainable

Socket itself is built to be maintainable. Zero dependencies and a smaller code base than any other competing project.

Lean & Fast

Socket has a smaller memory footprint and creates smaller binaries than any other competing project.

Getting Started

Install

From Package Manager Or From Source

The easiest way to install Socket Runtime is by using a package manager like npm. If you don't have npm installed but you want to use it, you can download it here.

npm i -g @socketsupply/socket

Let your package manager handle all the details (any OS).

pnpm add -g @socketsupply/socket

Let your package manager handle all the details (any OS).

curl -s -o- https://sockets.sh/sh | bash -s

Install by compiling from source (MacOS or Linux).

iwr -useb https://sockets.sh/ps | iex

Install by compiling from source (Windows).

From Create Socket App

Installing Socket Runtime from Create Socket App will be instantly familiar to anyone who has used React's Create React App.

The idea is to provide a few basic boilerplates and some strong opinions so you can get coding on a production-quality app as quickly as possible.

To get started, create an empty directory and try one of the following commands:

npx create-socket-app [react | svelte | tonic | vanilla | vue]
npm create socket-app [react | svelte | tonic | vanilla | vue]
yarn create socket-app [react | svelte | tonic | vanilla | vue]
pnpm create socket-app [react | svelte | tonic | vanilla | vue]

After running the command you'll see a directory structure like this...

.
├── README.md
├── build.js
├── package.json
├── socket.ini
├── src
│   ├── icon.png
│   ├── index.css
│   ├── index.html
│   └── index.js
└── test
    ├── index.js
    └── test-context.js

Anatomy of a Socket App

[Diagram: JS, HTML, and CSS running in the Socket Runtime, with an optional sub-process, producing a mobile or desktop UI ("Hello, World") on Android, iOS, Windows, Linux, and MacOS.]

Sub Process: Some apps do computationally intensive work and may want to move that logic into a sub-process. That sub-process is piped to the render process, so it can be written in any language.

Mobile or Desktop UI: This is what you see on your screen when you open an app, either on your phone or your desktop.

Socket Runtime: The Socket CLI tool builds, packages, and manages your application's assets. The runtime abstracts the details of the operating system so you can focus on building your app.

JS: This is plain old JavaScript that is loaded by the HTML file. It may be bundled. It runs in a browser-like environment with all the standard browser APIs.

CSS: This is plain old CSS that is loaded by the HTML file.

HTML: This is plain old HTML that is loaded by the Socket Runtime.

index.js:

import fs from 'socket:fs/promises'

window.addEventListener('DOMContentLoaded', async () => {
  console.log(await fs.readFile('index.html'))
})
index.html:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <link rel="stylesheet" href="./index.css" />
  </head>
  <body>
    <h1>Hello, World</h1>
    <script type="module" src="./index.js"></script>
  </body>
</html>
index.css:

h1 {
  text-transform: uppercase;
}
The sub-process (optional):

// This can be any program that reads from stdin and writes to stdout.
// Unlike Electron or other frameworks, this is completely optional.
// This is an example of using a JavaScript runtime as a sub-process.

import { Message } from '@socketsupply/socket-api/ipc.js'
import pipe from '@socketsupply/node-pipe'

pipe.on('data', data => {
  pipe.write(data)
})

pipe.write(Message.from('setTitle', { value: 'hello' }))

Next Steps

The same codebase will run on mobile and desktop, but there are some features unique to each. Ready to dive a bit deeper?

Mobile → Desktop →

Apple Guide

The following is a guide for building apps on Apple's MacOS and iOS operating systems.

Prerequisites

  1. Sign up for a (free) Apple Developer account.
  2. Register your devices for testing. You can use the ssc list-devices command to get your Device ID (UDID). The device should be connected to your Mac by wire.
  3. Create a wildcard App ID for the application you are developing.
  4. Write down your Team ID. It's in the top right corner of the website. You'll need this later.

MacOS

  • Xcode Command Line Tools. If you don't have them already, and you don't have Xcode, you can run the command xcode-select --install.

iOS

Code Signing Certificates

  • Open the Keychain Access application on your Mac (it's in Applications/Utilities).
  • In the Keychain Access application choose Keychain Access -> Certificate Assistant -> Request a Certificate From a Certificate Authority...
  • Type your email in the User Email Address field. Other form elements are optional.
  • Choose Request is Saved to disk and save your certificate request.

MacOS

Signing software on MacOS is optional, but it's best practice. Not signing software is like using http instead of https.

  • Create a new Developer ID Application certificate on the Apple Developers website.
  • Choose the certificate request you created two steps earlier.
  • Download your certificate and double-click to add it to your Keychain.

iOS

To run software on iOS, it must be signed by Xcode using the certificate data contained in a "Provisioning Profile". This is a file generated by Apple and it links app identity, certificates (used for code signing), app permissions, and physical devices.

  • Create a new iOS Distribution (App Store and Ad Hoc) certificate on the Apple Developers website.
  • Choose the certificate request you created two steps earlier.
  • Download your certificate and double-click to add it to your Keychain.

When you run ssc build --target=ios . on your project for the first time, you may see the following because you don't have a provisioning profile:

ssc build --target=ios .
• provisioning profile not found: /Users/chicoxyzzy/dev/socketsupply/birp/./distribution.mobileprovision. Please specify a valid provisioning profile in the ios_provisioning_profile field in your `ssc.config`
  • Create a new Ad Hoc profile. Use the App ID you created with the wildcard.
  • Pick the certificate that you added to your Keychain two steps earlier.
  • Add the devices that the profile will use.
  • Add a name for your new distribution profile (we recommend naming it "distribution").
  • Download the profile and double-click it. This action will open Xcode. You can close it after it's completely loaded.
  • Place your profile in your project directory (the same directory as ssc.config). Provisioning profiles are secret; add your profile to .gitignore.

Configuration

MacOS

You will want to ensure the following fields are filled out in your ssc.config file. They will look something like this...

mac_team_id: Z3M838H537
mac_sign: Developer ID Application: Operator Tools Inc. (Z3M838H537)

iOS

  1. Set the ios_distribution_method value in ssc.config to ad-hoc
  2. Set the ios_codesign_identity value in ssc.config to the certificate name as it's displayed in the Keychain, or copy it from the output of security find-identity -v -p codesigning
  3. Set the ios_provisioning_profile value in ssc.config to the filename of your provisioning profile (i.e., "distribution.mobileprovision"), as shown below
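
Together, these fields will look something like this (the identity and profile values are illustrative placeholders, reusing the team from the MacOS example above):

ios_distribution_method: ad-hoc
ios_codesign_identity: iPhone Distribution: Operator Tools Inc. (Z3M838H537)
ios_provisioning_profile: distribution.mobileprovision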

Development

Create a simulator VM and launch the app in it

ssc build --target=iossimulator -r .

Distribution And Deployment

ssc build --target=ios -c -p -xd .

To your device

Install Apple Configurator, open it, and install Automation Tools from the menu.

Connect your device and run ssc install-app <path> where the path is the root directory of your application (the one where ssc.config is located).

An alternative way to install your app is to open the Apple Configurator app and drag the inner /dist/build/[your app name].ipa/[your app name].ipa file onto your phone.

To the Apple App Store

xcrun altool --validate-app \
  -f file \
  -t platform \
  -u username [-p password] \
  [--output-format xml]
xcrun altool --upload-app \
  -f file \
  -t platform \
  -u username [-p password] \
  [--output-format xml]

Debugging

Check the troubleshooting guide first. You can also run lldb and attach to a process, for example...

process attach --name TestExample-dev

Logging

To see logs on either platform, open Console.app (installed on MacOS by default) and in the right-side panel pick the device or computer name.

Working with the file system on iOS

iOS Application Sandboxing has a set of rules that limits access to the file system. Apps can only access files in their own sandboxed home directory.

Documents
  The app’s sandboxed documents directory. The contents of this directory are backed up by iTunes and may be made accessible to the user via iTunes when UIFileSharingEnabled is set to true in the application's Info.plist.
Library
  The app’s sandboxed library directory. The contents of this directory are synchronized via iTunes (except the Library/Caches subdirectory, see below), but never exposed to the user.
Library/Caches
  The app’s sandboxed caches directory. The contents of this directory are not synchronized via iTunes and may be deleted by the system at any time. It's a good place to store data that provides a good offline-first experience for the user.
Library/Preferences
  The app’s sandboxed preferences directory. The contents of this directory are synchronized via iTunes. It is intended for use by the Settings app; avoid creating your own files in this directory.
tmp
  The app’s sandboxed temporary directory. The contents of this directory are not synchronized via iTunes and may be deleted by the system at any time. However, it's recommended that you manually delete data that is no longer necessary to minimize the space your app takes up on the file system. Use this directory to store data that is only useful during the app's runtime.
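
As a rough sketch of how an app might use these locations with the File System API (the HOME constant below is a hypothetical stand-in for the app's sandboxed home directory, which you would resolve at runtime):

import fs from 'socket:fs/promises'

// Hypothetical placeholder; resolve the real sandboxed home at runtime.
const HOME = '/path/to/app-sandbox'

// Documents: durable, backed up, user-visible if UIFileSharingEnabled is true.
await fs.writeFile(`${HOME}/Documents/notes.txt`, 'data the user should keep')

// Library/Caches: good for offline-first data, but the OS may purge it.
await fs.writeFile(`${HOME}/Library/Caches/feed.json`, '{"items":[]}')

// tmp: scratch space; delete it yourself when you no longer need it.
await fs.writeFile(`${HOME}/tmp/upload-chunk.bin`, 'ephemeral data')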

Desktop Guide

Getting Started

Open a terminal and navigate to where you keep your code. Create a directory and initialize it.

ssc init

Mobile Guide

Requirements for Android

Download Android Studio.

Requirements for iPhone

Download Xcode.

Develop & Debug Cycle

You'll want to write code, see it, change it, and repeat this cycle. So the typical approach is to create a watch script that rebuilds your files when there are changes. If you provide a port, the ssc command will try to load from http://localhost on that port.

ssc build -r --port=8000

You'll need to tell your build script the output location. The ssc command can tell you the platform-specific build destination. For example:

./myscript `ssc list-build-target .`
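
For example, a minimal build.js might look like this sketch (esbuild is an assumption here; the boilerplate generated by create-socket-app may differ). It takes the output directory as its first argument:

// build.js (sketch): bundle src/index.js into the platform build destination.
// Usage: node build.js `ssc list-build-target .`
import esbuild from 'esbuild'

const target = process.argv[2] // the platform-specific output directory

await esbuild.build({
  entryPoints: ['src/index.js'],
  bundle: true,
  outdir: target
})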

Running the Mobile Simulator

After you get your UI looking how you want, the next step is to try it out in the simulator. At this point, you can build for either the iOS or Android simulator target. This will create a platform-specific bundle, create and boot a simulator VM, and then run your app in the simulator if the -r flag is provided.

ssc build --target=iossimulator -r

Debugging in the Mobile Simulator

You can use Safari to attach the Web Inspector to the Simulator. In the Safari menu, navigate to Develop -> Simulator -> index.html. This will be the exact same inspector you get while developing desktop apps.

Next Steps

The JavaScript APIs are the same on iOS and Android, check out the API docs.

Peer To Peer

Socket Runtime provides a modern network API that allows apps to communicate without any server infrastructure requirements.

Goals

There are 3 categories we want to address with this API.

Complexity

Servers are natural bottlenecks (one server to many users), and scaling them up quickly becomes a complex distributed system of shared state. A P2P network is many users to many users, and although it is also an eventually consistent, distributed system of shared state, the total complexity of all the components needed to create a robust P2P network is finite and transparent. And unlike cloud services, since it's entirely transparent, it can be verified with formal methods by anyone.

Security & Sovereignty

P2P is simply a networking technique; it's not more or less dangerous than any other approach to computer networking. A P2P app is not more likely to contain malicious code or perform unwanted behavior than any web page or client-server application. P2P is entirely transparent, which makes it easier to audit.

Conversely, Cloud services are a closed system — owned and operated by a private third party. It's not possible for most people to audit them. With Gov. Cloud, auditing is intermittent. There is always a non-zero chance for a greater number of incidents than if every single bit was self-contained. Fewer moving parts means less risk.

Cost

As your application grows, so do the costs of the services you use. Growth usually means combining services from different providers, and staffing the experts to glue it all together.

Use Cases

Using Socket Runtime, any front-end developer familiar with HTML, CSS, and JavaScript can create a fully functional chat app like Telegram, a social media app like Twitter, or collaborative content creation apps like Figma or Google Docs entirely without the cost, expertise, or complexity required by server infrastructure.

How It Works

Socket Runtime ships with a P2P protocol named "Stream Relay". This is a network protocol designed to help your apps communicate in a way that is disruption tolerant.

import { Peer } from 'socket:peer'

const peer = new Peer()

peer.onConnect = (remotePeer, packet, port, address) => {
  console.log(remotePeer)
}

peer.onPacket = (packet, port, address) => {
  console.log(packet)
}

peer.init()

peer.publish({
  clusterId: 'greetings',
  message: 'hello, world'
})

IP Addresses & Ports

To get your computer or phone connected to the Internet, you get an account from an Internet Service Provider (an ISP). Every device connects to the Internet through a router. You might not think about it if you're on a phone; in your home it's probably a box with the blinking lights!

Everything that connects to the Internet needs an address so that we know who sent what and where replies should be sent.

Your router is assigned an IP address by your ISP, and your computer is assigned a local IP address by your router. All computers connected to your router get a unique local IP address.

But that's not enough information to start communicating. Imagine your computer is like an office building: its IP address is like the street address, and every program that runs is like a different office in that building. Each office gets assigned a unique number, which we call the "internal port".

Routers & NATs

Now imagine lots of programs running on lots of computers, all wanting to send packets. To manage this, the router maintains a database that maps a program's internal port and the IP address of the computer it's running on to a unique external port number. This way the router ensures that inbound packets always reach the program running on the computer that is expecting them. This is called Network Address Translation, or NAT for short. And different routers, made by different manufacturers for different purposes, can have different behaviors!

Your computer’s local IP address isn't visible to the outside world, the router’s IP address isn’t visible to your computer, and neither are the port numbers in the router's database. To make things even more difficult, if the router sees inbound packets that don't match an outbound request, it will generally discard them. Different routers may even assign port numbers differently! All this makes it hard to get server-like behavior from your mobile device or laptop. But with P2P, we want server-like behavior: we want to listen, a little bit like a server. This is where it starts to get complicated.

Reflection

Before other computers on the Internet can communicate directly with a program on your computer, you need to know your router's public IP address and the external port number that your router has assigned to your program. You also want to know what kind of NAT behavior to expect from your router. The way we determine this is by asking two other peers that are "on the internet" (not behind our router) for our address and port info. If both peers respond with the same port, we are behind an Easy NAT. If they respond with different ports, we are behind a Hard NAT. If we respond to an unsolicited query on a well-known port, we are behind a Static NAT. We call this process of asking another peer for address information "Reflection".
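
A sketch of that classification logic (illustrative only, not Socket's source):

// Classify our NAT from two reflection responses, as described above.
// `respondsToUnsolicited` would come from a probe to our well-known port.
function classifyNat (portSeenByPeerA, portSeenByPeerB, respondsToUnsolicited) {
  if (respondsToUnsolicited) return 'static'
  return portSeenByPeerA === portSeenByPeerB ? 'easy' : 'hard'
}

console.log(classifyNat(40212, 40212, false)) // 'easy'
console.log(classifyNat(40212, 59981, false)) // 'hard'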

The peers we ask for this information will only be able to respond if they are behind Easy or Static NATs. We generally call these "introducers", because they are peers that can also introduce us to other peers. An introducer can be an iPhone, a laptop, an EC2 instance; it can be anything as long as it has the correct NAT type. Let's say Alice wants to talk to Bob. If they both tell Catherine their information, she can introduce them. Catherine sends Alice Bob's address, port, and NAT type, and she sends Bob Alice's address, port, and NAT type.

NAT Traversal

Now they are ready to start the process of negotiating a connection. This process is called NAT traversal (aka "Hole Punching"). Alice and Bob's NAT types will determine how well this negotiation will go. If Alice and Bob are both on Static NATs, connecting is trivial, there is no need to negotiate. If Alice is on an Easy NAT, she must first send a packet to Bob. This packet will fail to be delivered but will open a port on her router.

Now that the port is open and the router is expecting to see responses addressed to it, Bob can send messages and the router will not consider them unsolicited. If Alice is on a Hard NAT, the process is slightly more complicated: she opens 256 ports, and Bob sends packets until he successfully establishes a connection. This works better than it sounds, due to probability. For example, if you survey a random group of just 23 people, there is about a 50-50 chance that two of them will share a birthday. This is known as the birthday paradox, and the same effect speeds up Bob's guessing process, so connection times in this scenario are under a second, and once the port is open it can be kept open. Only about a third of all NATs are Hard, so connection times are about the same as they are in client-server architectures. Note that this whole process doesn't work if both NATs are Hard.
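
The arithmetic behind that claim, as a back-of-the-envelope sketch (the numbers come from the paragraph above; this is not Socket's source):

// If Alice opens 256 ports, each random probe Bob sends has a 256/65535
// chance of landing on one. The odds of at least one hit grow quickly.
const OPEN_PORTS = 256
const PORT_RANGE = 65535

function hitProbability (probes) {
  return 1 - Math.pow(1 - OPEN_PORTS / PORT_RANGE, probes)
}

for (const n of [64, 128, 177, 256, 512]) {
  console.log(`${n} probes -> ${(hitProbability(n) * 100).toFixed(1)}% chance of a hit`)
}
// Around 177 probes the odds pass 50%, so at typical probe rates the
// negotiation completes in well under a second.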

Note: There is an optimization where you can check if the router supports port mapping protocols such as NAT-PMP or UPnP, but in our research very few routers respond to queries for these protocols.

Now you have a direct connection, and you can try to keep it alive if you want. But it will likely go offline soon; most modern networks are extremely dynamic. Imagine taking your phone out to check Twitter for a few seconds, then putting it back in your pocket. So the next part of our protocol is just as important as the NAT traversal part.

Disruption Tolerance

In modern P2P networks, all peers should be considered equally unreliable. They may be online in short bursts. They may be online but unreachable. They may be offline for a few seconds, or offline for a month. This protocol anticipates these conditions in its design — any node, or any number of nodes are able to fail and this will not affect the integrity of the overall network.

Let's say that Alice and Bob get connected and want to have a video chat. They can send packets of data directly to each other. They may have minor network interruptions that cause inconsequential packet loss. But what if Alice and Bob are text messaging? A lot of the time this kind of communication is indirect (or asynchronous). If Bob goes offline, how can he receive the subsequent messages that Alice sends? This is where we need to dig into the protocol design.

Protocol Design

The Stream Relay protocol is a "replicating", message-based (UDP) protocol. A replicating protocol casts a wide net and yields faster response times and higher hit rates, but wastes more packets. The Epidemic Broadcast Trees paper (Plumtree, Joao Leitao, et al., 2007) defines a metric, Relative Message Redundancy, to measure the message overhead in gossip/replicating protocols. But the paper was published prior to the dominance of mobile devices and their usage patterns. It advocated conserving bandwidth using a "lazy push approach", which, as a trade-off, made it slower and introduced central services for calculating trees. With mobile usage patterns, there is a smaller window of time in which to satisfy a user, so this optimization is no longer relevant, especially if we factor in the declining cost of bandwidth and the increased demand for faster responses.

In the simplest case, epidemic protocols deliberately make no attempt to eliminate any degree of flooding. However, protocols such as MaxProp (John Burgess, Brian Gallagher, David Jensen, and Brian Neil Levine, Dept. of Computer Science, Univ. of Massachusetts, Amherst) add optimizations that support their claim that replicating protocols can outperform protocols with access to oracles (an oracle being something like a Distributed Hash Table, a Minimum Spanning Tree, or a Broadcast Tree).

We adopt many of MaxProp's optimizations in decisions about packet delivery and peer selection, including optimizations where peers are weighted by their distance (the delta of the time between when a message was sent and a response was received).

The non-exhaustive overview is that when a packet is sent indirectly, it is replicated to 3 peers in the network. The peer selection process is weighted by the distance and availability of peers (the delta of the time between when a message was sent and a response was received, as well as the average uptime of a peer). The process of picking which messages to send is based on what is directly requested, but also on which packets have the lowest hop count.

Each of those peers in turn will cache the packet for a 6-hour TTL and replicate the packet, increasing the packet's hop count until it reaches 16, at which point the packet is dropped. When the packet expires, it is rebroadcast one last time to 3 random peers in the network. This approach is effective for reaching the majority of intended peers, and in the case a packet isn't delivered, the recipient only needs 1 of N packets to query the network for a missing packet.
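
A minimal sketch of that replication rule (illustrative only, not the Stream Relay source; the peer shape and selection weighting are assumptions based on the description above):

const MAX_HOPS = 16
const FANOUT = 3
const TTL_MS = 6 * 60 * 60 * 1000 // 6-hour cache TTL

// Prefer peers that are close (low round-trip delta) and available (high uptime).
function selectPeers (peers, n) {
  return [...peers]
    .sort((a, b) => (a.rtt / a.uptime) - (b.rtt / b.uptime))
    .slice(0, n)
}

function relay (packet, peers, cache) {
  if (packet.hops >= MAX_HOPS) return // drop: hop limit reached
  cache.set(packet.id, { packet, expires: Date.now() + TTL_MS })
  for (const peer of selectPeers(peers, FANOUT)) {
    peer.send({ ...packet, hops: packet.hops + 1 })
  }
}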

Figure A shows a network of ±150 peers in a 15-second window, where each peer has an average lifetime of ±13 seconds. This is a highly dynamic network without any central servers. Peers joining the network are represented by a green line, and peers leaving the network are represented by a red line. In this case we see a total network change of ±16,416% (100% being the initial size of the network).

Despite these conditions, a packet can reach ±96.00% of the subscribers before any queries are needed. A blue line represents the peers that have received the packet, while the other solid line above it represents the total number of peers that want to receive it.

The average cost distribution to each peer in the network is ±0.00065Mb, with a message redundancy of ±0.00062%. This is a nominal cost compared to the cost of the average Web page (results varied widely, these are averages over ±50 runs). As with the Plumtree paper, control packets are not factored into the total network cost.


Site          Initial Pageload   First 15s of clicks/scrolling
discord.com   ±28 Mb             ±5 Mb (and climbing)
twitter.com   ±9 Mb              ±18.5 Mb (and climbing)
google.com    ±4.5-6 Mb          ±70 Mb (every click reloads content)
yahoo.com     ±36 Mb             ±80+ Mb (and climbing)

TODO This section needs more data

Stream Relay's network packets are identified by a sha256 hash of their content, and they are also causally linked, making it possible for them to be delivered in any order and then made eventually consistent. This means that Alice can continue to send messages to Bob, even if Bob goes offline. Messages can persist in the network, moving from peer to peer, for potentially thousands of hours.

Why UDP?

TCP is often thought of as an ideal choice for packet delivery since it's considered "reliable". But when TCP loses a packet, it withholds all subsequent packets until the lost packet is retransmitted and received; this can mean a delay of up to 1s (as per RFC 6298 section 2.4). If the packet can't be retransmitted, an exponential backoff could add another 600ms of delay before the retransmission.

In fact, Head-of-Line blocking is a problem with any ordered stream, whether TCP, or UDP with additional higher-level protocol code that imposes ordering.

TCP introduces other unwanted complexity that makes it less ideal for P2P.

UDP is only considered "unreliable" in the way that packets are not guaranteed to be delivered. However, UDP is ideal for P2P networks because it’s message oriented and connectionless (ideal for NAT traversal). Also because of its message-oriented nature, it’s light-weight in terms of resource allocation. It's the responsibility of a higher-level protocol to implement a strategy for ensuring UDP packets are delivered.

Stream Relay Protocol eliminates Head-of-Line blocking entirely by reframing packets as content-addressable, doubly-linked lists, allowing packets to be delivered out of order and become eventually consistent. Causal ordering is made possible by traversing the previous ID or next ID to determine whether there were packets that came before or after one that is known.

And in the case where there is loss (or simply missing data), the receiver MAY decide to request the packet. If the peer becomes unavailable, query the network for the missing packet.

The trade-off is that more data is required to re-frame the packet. The typical MTU for a UDP packet is ~1500 bytes. Stream Relay Protocol uses ~134 bytes for framing, leaving 1024 bytes for application or protocol data, which is more than enough.
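
To make the linked-list framing concrete, here is a minimal sketch in Node-flavored JavaScript (illustrative only; the real packet layout and field names differ):

import { createHash } from 'node:crypto'

// Identify a packet by the sha256 hash of its content.
function packetId (content) {
  return createHash('sha256').update(content).digest('hex')
}

// Each packet carries a link to the packet that came before it.
function makePacket (content, previousId = null) {
  return { id: packetId(content), previousId, content }
}

// Recover causal order from an unordered set of packets by walking the links.
function orderPackets (packets) {
  const byPrev = new Map(packets.map(p => [p.previousId, p]))
  const ordered = []
  let current = byPrev.get(null) // the head has no predecessor
  while (current) {
    ordered.push(current)
    current = byPrev.get(current.id)
  }
  return ordered
}

const a = makePacket('hello')
const b = makePacket('world', a.id)
console.log(orderPackets([b, a]).map(p => p.content)) // ['hello', 'world']

A missing packet simply breaks the walk at a known ID, which is exactly the signal the receiver uses to request, or query the network for, the missing packet.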

Further Reading

See the specification and the source code for more details.

Troubleshooting

macOS

Crashes

To produce a meaningful backtrace that can help debug the crash, you'll need to resign the binary with the ability to attach the lldb debugger tool. You'll also want to enable core dumps in case the analysis isn't exhaustive enough.

sudo ulimit -c unlimited # enable core dumps (`ls -la /cores`)
/usr/libexec/PlistBuddy -c "Add :com.apple.security.get-task-allow bool true" tmp.entitlements
codesign -s - -f --entitlements tmp.entitlements ./path/to/your/binary
lldb ./path/to/your/binary # type `r`, then after the crash type `bt`

Clock Drift

If you're running from a VM inside MacOS you may experience clock drift and the signing tool will refuse to sign. You can set sntp to refresh more frequently with the following command...

sudo sntp -sS time.apple.com

macOS asks for a password multiple times on code signing

Open Keychain Access and find your developer certificate under the My Certificates section. Expand your certificate and double-click on a private key. In the dialog click the Access Control tab.

The codesign utility is located at /usr/bin/codesign. To add it to the allowed applications list, click the "+" button to open the File Dialog, then press ⌘ + Shift + G and enter /usr/bin. Select the codesign utility from Finder.

Build or compile failures

aclocal / automake: command not found

To build ssc for iOS you need automake and libtool installed.

brew install automake
brew install libtool

unable to build chain to self-signed root for signer (...)

You need the intermediate certificate that matches your code signing certificate. To find which "Worldwide Developer Relations" certificate matches yours, open the signing certificate in your keychain, open this page, and find the certificate that matches the details in the "Issuer" section of your certificate.

xcrun: error: SDK "iphoneos" cannot be located

You have to configure the Xcode command line tools. To do this, you can run the following command:

sudo xcode-select --switch /Applications/Xcode.app

fatal error: 'lib/uv/include/uv.h' file not found

Make sure your local ssc binary was compiled with the ios parameter (./bin/install.sh ios); otherwise uv.h will not exist.

unable to find utility simctl

You need to have Xcode installed on your Mac.

You have not agreed to the Xcode license agreements, please run 'sudo xcodebuild -license' from within a Terminal window to review and agree to the Xcode license agreements.

You can run sudo xcodebuild -license to agree to the license.

Multiple Password Prompts

If macOS is asking you for a password every time you run a command with the -c flag, follow the instructions in the "macOS asks for a password multiple times on code signing" section above.

Application crashes on start

If you use iTerm2, your app may crash with:

This app has crashed because it attempted to access privacy-sensitive data without a usage description. The app's Info.plist must contain an NSBluetoothAlwaysUsageDescription key with a string value explaining to the user how the app uses this data.

Command line apps inherit their permissions from iTerm, so you need to grant Bluetooth permission to iTerm in macOS system preferences. Go to Security & Privacy, open the Privacy tab, and select Bluetooth. Press the "+" button and add iTerm to the apps list.

Windows

Development Environment

clang++ version 14 is required for building.

You will need build tools

The WebView2LoaderStatic.lib file was sourced from this package.

Cannot Run Scripts

If the app cannot be loaded because running scripts is disabled on this system, you'll see an error like:

./bin/bootstrap.ps1 : File C:\Users\user\sources\socket\bin\bootstrap.ps1 cannot be loaded because running scripts is
disabled on this system. For more information, see about_Execution_Policies at
https:/go.microsoft.com/fwlink/?LinkID=135170.

Then you can follow https://superuser.com/a/106363

  1. Start Windows PowerShell with the "Run as Administrator" option.
  2. set-executionpolicy remotesigned

MSVC

Setting up the MSVC build environment from Git Bash

You can leverage the MSVC build tools (clang++) and environment headers in Git Bash by loading them into your shell environment. This is possible by running the following command:

source bin/mscv-bash-env.sh

The bin/install.sh shell script should work for compiling the ssc tool. It is also recommended to initialize this environment when building applications with ssc from the CLI so that the correct build tools are used and the proper header and library paths are passed to the compiler.

Linux

Build failures

If you are getting a failure saying the build tool can't locate your compiler, try making sure that the CXX environment variable is set to the location of your C++ compiler (which g++, or which c++).

The latest version of MacOS should have a C++ compiler installed for you, but on Linux you may need to update some packages. To ensure you have the latest clang compiler and libraries, you can try the following...

For Debian/Ubuntu, before you install the packages, you may want to add these software update repos to the software updater.

Note that clang version 14 is only available on Ubuntu 22.04. Use clang 13 for prior versions of Ubuntu.

Ubuntu

sudo apt install \
  build-essential \
  clang-14 \
  libc++1-14-dev \
  libc++abi-14-dev \
  libwebkit2gtk-4.1-dev

Arch/Manjaro

Arch uses the latest versions, so just install base-devel.

sudo pacman -S base-devel

Multiple g++ versions

If you've tried running the above apt install command and you get an error related to Unable to locate package, you can also install multiple versions of g++ on your system.

sudo apt install software-properties-common
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt install gcc-9 g++-9 gcc-10 g++-10 gcc-11 g++-11 gcc-12 g++-12

Then you can set g++-12 as your C++ compiler:

# Add this to bashrc
export CXX=g++-12

Can't find Webkit

If you run into an error about not finding webkit & gtk like this:

Package webkit2gtk-4.1 was not found in the pkg-config search path.
Perhaps you should add the directory containing `webkit2gtk-4.1.pc'
to the PKG_CONFIG_PATH environment variable
No package 'webkit2gtk-4.1' found
In the file included from /home/runner/.config/socket/src/main.cc:9:
/home/runner/.config/socket/src/linux.hh:4:10: fatal error: JavaScriptCore/JavaScript.h: No such file or directory
    4 | #include <JavaScriptCore/JavaScript.h>
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.

Then you will want to install those dependencies

sudo apt-get install libwebkit2gtk-4.1-dev

FAQ

What is Socket Runtime?

Socket Runtime is a free and open-source client-side runtime that helps web developers build apps for any OS, desktop and mobile. You can use plain old HTML, CSS, and JavaScript, as well as your favorite front-end libraries, for example React, Svelte, and Vue.

Socket ships binaries that start at ±1 MB on desktop and ±12 MB on mobile because it leverages the OS's native webview. This is in contrast with Electron, for example, which ships an entire copy of a browser (and Node) with each program.

How does it fit into the runtime ecosystem?

Browser Runtimes   Server Runtimes   App Runtimes
Safari             Bun               Socket
Firefox            Deno              Tauri
Chrome             Node.js           Electron

How is Socket different from other hybrid-native runtimes, such as Electron, Tauri, NativeScript, React Native, Ionic, etc?

  • Socket is for Web developers, there is no new language to learn.

  • Socket is the first and only cross-platform runtime built from the ground up for desktop and mobile.

  • We embrace web standards instead of inventing new paradigms.

  • P2P and local-first are first-class considerations. We provide JavaScript APIs for Bluetooth, UDP, and robust file system IO. These make it possible to create an entirely new class of apps that are autonomous from the cloud and allow users to communicate directly without any infrastructure requirements.

Why should I care about P2P?

P2P features allow a developer to create apps where users can communicate directly, without the Cloud. It doesn’t require any servers at all, and even works when users are offline. These features are optional, they are NOT turned on by default and won't interrupt or conflict with your existing architecture or services.

Do Socket apps have total access to my computer, like Electron or Tauri?

Electron and Tauri both take the approach of putting all "business" logic into the "main" process. The reasons for this are (1) to avoid degrading the performance of the UI (front-end) process, and (2) as a potential security measure.

However a "main" process is impossible to mitigate on desktop because there is no sandboxing model like there is on mobile, so advanced security procedures between the main and render process becomes aren't convincing. It's really up to the app creator to appeal to the user's sense of trust.

Socket Runtime takes a completely different approach. While we allow a "main" process, it's completely optional and not considered best practice. If you are shipping highly sensitive IP, you may choose to put it there. If you have compute-intensive code, you can also put it there. But ideally, you put it into a worker thread.

Socket apps can be written entirely in JavaScript, CSS, and HTML. The UI process can be made secure via the CSP (a web standard for white-listing resource access).

Invocation of filesystem, bluetooth, network, etc. is all made over IPC calls that use a URI scheme (ipc://...). Because of this, it works perfectly with CSP (a well-established web standard).

Any curious user can run a command like strings foo.app | grep ipc:// on a socket app bundle and examine the CSP of the index file.

Can Socket Apps run and be compiled headlessly?

Yes. This makes it great for creating Web developer tooling since it has a native DOM and all the browser APIs built in.

How can I trust what Socket is doing with my applications?

Socket is open-source. We would love for you to read all our code and see how we're doing things! Feel free to contact us as well and we can walk you through it.

Is Socket a Service?

Socket is NOT a cloud service. We do not have a SaaS offering. And there is no part of this that is hosted in the cloud.

There is a complementary application performance management product (APM), Socket Operator, that can diagnose and remediate issues within the production apps you build. This is also not a service, it's software.

But you're also a business, so you have to have some private technologies that you charge for, to make money?

As stated above, Socket Supply Co. builds and maintains a free and open source Runtime that helps web developers build apps for any OS, desktop, and mobile, as well as a p2p library, that enables developers to create apps where users can communicate directly, without the Cloud.

These will always be open-source and free to use by any developer, no matter what they use them for (commercial or personal). That will always be true.

Our Operator App has different tools which help in the entire lifecycle of building, deploying, and monitoring the Socket apps you build. Operator App has various pricing tiers which hackers, startups, and enterprises can benefit from.

We already have teams of engineers that build our web and other native-platform app experiences. Why would we benefit from Socket?

App builders can prioritize what they want to solve when working with Socket. There are many benefits to choose from for a wide variety of reasons.

Cost reduction — For smaller teams who don’t have native teams in place, they can get to their customers quicker by writing once, and running anywhere. Cloud bills are the #1 cost for many organizations, building on Socket reduces that to $0, or as much as you want to migrate off the cloud. We say crawl, walk, run.

Autonomy — Right now you’re entirely codependent on a 3rd party to run a mission-critical part of your business. The Cloud is a landlord-tenant relationship with costs that can prevent your business from becoming profitable. Socket helps you connect your users directly to each other, allowing you to rely less on the Cloud, and reclaim your sovereignty, and your profit margins.

Complexity — Companies whose applications are built across desktop and mobile would be moving from working and maintaining >= 3 code bases in their current state to 1 code base with Socket. This drastically reduces complexity within the organization and speeds up feature releases.

Builders of network-enabled Productivity and Collaboration tools will realize major benefits by building on Socket. Evan Wallace, Co-founder of Figma said it best "these days it’s obvious that multiplayer is the way all productivity tools on the web should work, not just design."

If we define "web 3" to mean a decentralized web, then yes. We don’t really take a position on anything else. We provide a technical foundation that makes it possible for many Web3 ideals to come to fruition.

In its current state, Web3 is not decentralized. The ecosystem relies heavily on centralized cloud providers like AWS for infrastructure. This is an economic disadvantage and in most cases a barrier to entry. However, apps built with Socket’s P2P capabilities can be 100% decentralized, and absolutely no servers are required. They can be fully autonomous, aligning directly with the mission of the web3 community.

Does P2P (without servers) mean that it only works if peers are online?

No! Socket's P2P protocol is designed for building disruption tolerant networks. It achieves long-lived partition tolerance through bounded replication of packets (limiting the number of hops and TTL of each packet that is relayed between peers in the network). Socket's P2P protocol builds on a corpus of existing academia. Please see the docs for more in-depth details.

If I send information to my friend or coworker, will other connected peer devices see this message as they relay it?

Peers do all relay packets for each other, to ensure that any peer can communicate with any other peer, even if they aren't directly connected or ever online with each other at the same time.

However, all data packets (those used for user data, not network coordination) are encrypted, such that only the intended recipient of the packets can decrypt and access the information therein.

So your message will reside in parts (packet by packet) on many other users' devices, at various times, but only in parts and only encrypted, meaning those other devices cannot make any sense of that data.

This encryption/decryption security uses industry-standard (and audited!) public key cryptography, similar to (and at least as safe as!) the HTTPS/TLS encryption that users across the web trust for communication with very sensitive sources, including banks, doctors, etc.

How do I know that a message I receive (and decrypt) was not tampered with or faked by someone other than who the message claims to be from?

At the network packet level, packets are encrypted using the public key of the intended recipient. Only the recipient (holding the paired private key) could possibly decrypt the packet, which would be necessary for tampering.

Any man-in-the-middle tampering with an encrypted packet would render the final decrypted value as garbage. The app would be able to immediately tell that the expected data was garbled and thus discard it.

Corrupted (or manipulated) packets, or even dropped/missing packets, can be automatically re-queried across the peer network, to reacquire the necessary packets. As such, the encryption used guarantees that information received is either complete and intact, before decryption, or entirely dropped.

As for determining the identity authenticity of the sender, the network protocol does not employ the overhead of digital signatures, verification, or digital certificates.

Socket apps are allowed, and expected, to employ their own security layered on top of (tunneled through) the network encryption provided automatically. This may include additional encryption, digital signatures, digital certificates (identity verification), and more, according to the needs and capabilities of the app.

All of those app-specific techniques are still leveraged and negotiated across Socket's peer network.

If other users' packets are relayed through my device, am I exposed to risk or liability for their data?

Your device never holds plain-text (or plainly accessible) data on behalf of any other user. The packets your device relays on behalf of others were encrypted for those intended recipients, and your device could never possibly decrypt or make sense of any of that data.

You thus have perfect deniability as your protection from those potential risks and liabilities.

This is analogous to internet infrastructure like switches/routers, which are peppered by the billions around the web. None of these devices can decrypt the HTTPS traffic transiting through them, and thus none of those devices ever have any liability for the kinds of information buried inside the encrypted data as it flows through.

Socket isn't introducing anything more dangerous here than has already existed for the last 25+ years of the internet.

More importantly, the relay of packets through your device only happens in memory (never on disk), and only while you have a Socket powered app open for use. If you close the app, or power-off / restart your device, that cache is wiped completely; the in-memory cache only gets filled back up with more packets when you open a Socket powered app while online.

As the device user, it's always your choice and in your control.

Does relaying packets mean my device is doing computation on behalf of other users?

No!

The P2P relaying of packets is merely a pass-thru of (encrypted) data. Your device performs almost no computation on these packets, other than to check the plaintext headers to figure out whether and how to relay it along.

Aside from this very simple and fast processing of these packets, your device will never perform any computation on behalf of any other person.

The only exception would be computation you had directly and expressly consented to via an app that you chose to install and open/use, if that app was designed in such a way to share computation work with others.

For example, "SETI@home" type apps intentionally distribute computation (image processing, etc) among a vast array of devices that have idle/unused computing power being donated to a good cause. Another plausible example: some apps are currently exploring distributing machine-learning (ML/AI) computations among an array of peers.

If you installed such an app, and opened it, your device would subject itself to app-level computation on behalf of others. But you remain in control of all those decisions, including closing such apps, uninstalling them, etc. And if you didn't install and open such an app, none of that distributed computation would ever happen on your device, regardless of how others use the P2P network.

No unintended/background/abusive computation on your device is ever possible by virtue of the Socket P2P protocol itself. Only apps themselves can coordinate such distributed computation activities, and only with expressed installation consent from users.

Aside from CPU computation, doesn't allowing my device to participate in packet relay for many other peers subject my device to extra resource utilization (using up my memory, draining my battery more quickly, etc.)?

The only resource utilization that occurs is that which you consent to by opening and using Socket apps.

Socket limits the memory used for the packet relay cache, currently to 16MB (not GB!). This is an extremely small slice of typical device memory, even budget-level smartphones (which typically have at least 1-2 GB of memory).

As for the battery, Socket does not perform unnecessary background work, so any battery usage you experience should be directly proportional to the active use of a Socket powered app.

Relaying packets is a very simple and resource-light type of task. In our testing, we haven't seen any noticeable increase in resource load on devices as a result of running a Socket powered app, compared to any other consumer apps users typically use.

As a matter of fact, Socket powered apps tend to use and transmit way less data than other commercial/consumer apps, so users can expect in general to see no worse -- and often much improved! -- resource utilization than for non-Socket apps.

Does P2P packet relay mean that data transmission, such as me sending a text message or picture to a friend, will go much slower?

P2P packet relay, even across a broad network of many millions of devices, is surprisingly fast and efficient, compared to typical presumptions.

If the sender and receiver of a message are both online at the time of a message being sent and are at most a few hops away in terms of the packet relay protocol of Socket, this transmission should take no more than a few hundred milliseconds at most.

In fact, since this communication is much more direct than in typical infrastructure, where messages have to go all the way out to a cloud server, and then on to the recipient, it's quite likely that communications will be at least as fast, if not much faster, via P2P communications techniques (relay, etc) as described.

If the recipient of my message is not online when I send it, how long will the packets stay alive in the P2P network before being dropped, if the recipient has not yet come online and received the packets?

There's a lot of "it depends" in this answer (including the size of the message, how many packets it takes, and network activity/congestion). But in general, messages may be able to survive for as long as a couple of weeks, and almost never less than several days.

Apps are expected to be designed with the lack of delivery guarantees in P2P networks in mind. To help users compensate and adapt, these apps should provide appropriate user-experience affordances, including "resend", "read receipt", and other such capabilities.

I've heard that P2P is too hard and doesn't work because NAT traversal is hard.

This is a hard problem. And until now there hasn't been a comprehensive solution for Web Developers.

We are able to reliably connect all kinds of NATs. For hard-to-hard NATs, we rely on other features of our protocol.

NAT traversal and negotiation are automatically handled so that app developers do not need to worry about these messy details. That said, all our code is open-source, so we invite you to take a deeper look if you're curious about how we handle these complicated tasks on your app's behalf. Our work builds on a corpus of peer-reviewed academia, primarily this paper.

Bad actors are certainly going to try to flood the network with junk, to deny/degrade service (DoS attacks), attack peers (DDoS attacks), etc. How can this P2P network possibly survive such abuse?

The P2P packet relay protocol includes a sophisticated set of balancing techniques, which acts to ensure that no peer on the network places an outsized burden on other peers in the network.

Fluctuations and usage differences of course are a reality, but the protocol naturally resists the kinds of behaviors that bad actors rely on.

We've done a significant amount of modeling simulations and real-world field tests, and we're convinced that these types of attacks will ultimately prove impractical and not affect the ultimate trajectory and growth of our P2P network.

Is this like BitTorrent, Tor, Napster, Gnutella, etc?

The web's roots are P2P, and yes there have been a number of widely known (and sometimes infamous!) attempts to bring the web back to its P2P identity over the years; some good, some not so good. Most of these are focused on file sharing. We see a broader opportunity with P2P which is focused on connectivity, reduced infrastructure cost, and reduced complexity in general.

We think the time has come for the web to return to the P2P model by default, to dismantle the wasteful and unnecessarily complicated (and expensive!) centralization trend that has given rise to the age of the "cloud". There are more than enough consumer devices, many of them highly connected, to accomplish decentralization.

While these changes have profound effects on improving how developers and businesses build and deliver experiences to consumers, it's the revolution of a user-centric web that most excites us.

Users don't need all of their data sent up to the cloud, nor do they want that. Users want privacy by default. Users don't need or want to be tracked with every single click or keystroke. Users don't want to wait, staring at spinners, while entire applications full of tens of megabytes of images, fonts, and JS code re-download every single page load. Users don't want or need walled-garden app stores to single-handedly decide what apps they're allowed to access, or how they're allowed to communicate and collaborate using those apps. Users don't want experiences that only work if they have a perfect internet connection, and die or are unavailable when wifi gets spotty.

All of these are hallmarks of the web as it is today, and all of these are tricks designed to work in favor of big centralized companies that slurp up all our data and then charge us rent to hold it. All of these are user-hostile behaviors that for the most part users can't opt out of, but overwhelmingly don't actually want.

Socket is a foundational building block that we believe can help usher in a new age of the web, one that puts users first. One that blurs the lines between websites and apps, and puts all those amazing experiences right on users' devices for them to use instantly, no matter where they are or what kind of internet connection they have (or not!). One that defaults to a local-first (or even local-only!) model that protects users' information by default.

Putting developers in control, and moreover putting users in control, isn't a fad or a phase. We think it's exactly where the web has to go to survive, and we believe it's where everyone that builds for the web will shift to eventually. Those are admittedly pretty big aspirations and goals, but they're far from unrealistic or naive.