Socket Runtime is pre-release; please track and discuss issues on GitHub.

Socket Runtime

Write once. Run anywhere. Connect everyone.

Socket Runtime is like a next-generation Electron. Build native apps for any OS using HTML, CSS, and JavaScript, on desktop and mobile! You can also connect your users and let them communicate directly, without the cloud or any servers!

Features

Local First

A full-featured File System API and Bluetooth support make it possible to create excellent offline and local-first user experiences.

P2P & Cloud

Built to support a new generation of apps that can connect directly to each other by providing a high-performance UDP API.

Use any backend

Business logic can be written in any language: Python, Rust, Node.js, etc. The backend is even completely optional.

Use any frontend

All the standard browser APIs are supported, so you can use your favorite front-end framework to create your UIs: React, Svelte, or Vue, for example.

Maintainable

Socket itself is built to be maintainable. Zero dependencies and a smaller code base than any other competing project.

Lean & Fast

Has a smaller memory footprint and creates smaller binaries than any other competing project.

Getting Started

Install

From Package Manager Or From Source

The easiest way to install Socket Runtime is by using a package manager like npm. If you don't have npm installed but you want to use it, you can download it here.

npm i @socketsupply/socket -g

Let your package manager handle all the details (any OS).

pnpm add -g @socketsupply/socket

Let your package manager handle all the details (any OS).

curl -s -o- https://sockets.sh/sh | bash -s

Install by compiling from source (MacOS or Linux).

iwr -useb https://sockets.sh/ps | iex

Install by compiling from source (Windows).

From Create Socket App

Create Socket App will be instantly familiar to anyone who has used React's Create React App.

The idea is to provide a few basic boilerplates and some strong opinions so you can get coding on a production-quality app as quickly as possible. Create an empty directory and try one of the following commands:

npx create-socket-app [react | svelte | tonic | vanilla | vue]
npm create socket-app [react | svelte | tonic | vanilla | vue]
yarn create socket-app [react | svelte | tonic | vanilla | vue]
pnpm create socket-app [react | svelte | tonic | vanilla | vue]

After running the command you'll see a directory structure like this...

.
├── README.md
├── build.js
├── package.json
├── socket.ini
├── src
│   ├── icon.png
│   ├── index.css
│   ├── index.html
│   └── index.js
└── test
    ├── index.js
    └── test-context.js

Anatomy of a Socket App

[Diagram: A Socket app. The Socket Runtime sits on top of the OS (Android, iOS, Windows, Linux, MacOS) and hosts the JS, HTML, and CSS that make up the mobile or desktop UI ("Hello, World"), plus an optional sub process.]

Some apps do computationally intensive work and may want to move that logic into a sub-process. That sub-process is connected to the render process via a pipe, so it can be written in any language.

This is what you see on your screen when you open an app either on your phone or your desktop.

The Socket CLI tool builds, packages, and manages your application's assets. The runtime abstracts the details of the operating system so you can focus on building your app.

This is plain old JavaScript that is loaded by the HTML file. It may be bundled. It runs in a browser-like environment with all the standard browser APIs.

This is plain old CSS that is loaded by the HTML file.

This is plain old HTML that is loaded by the Socket Runtime.

JavaScript (src/index.js)

import fs from 'socket:fs/promises'

window.addEventListener('DOMContentLoaded', async () => {
  console.log(await fs.readFile('index.html'))
})

HTML (src/index.html)

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <link rel="stylesheet" href="./index.css" />
  </head>
  <body>
    <h1>Hello, World</h1>
    <script type="module" src="./index.js"></script>
  </body>
</html>

CSS (src/index.css)

h1 {
  text-transform: uppercase;
}

Sub Process

// This can be any program that can read stdin and write to stdout.
// Unlike Electron or other frameworks, the sub process is completely optional.
// This is an example of using a JavaScript runtime as the sub process.

import { Message } from '@socketsupply/socket-api/ipc.js'
import pipe from '@socketsupply/node-pipe'

pipe.on('data', data => {
  pipe.write(data)
})

pipe.write(Message.from('setTitle', { value: 'hello' }))

Next Steps

The same codebase will run on mobile and desktop, but there are some features unique to each. Ready to dive a bit deeper?

Mobile → Desktop →

Apple Guide

The following is a guide for building apps on Apple's MacOS and iOS operating systems.

Prerequisites

  1. Sign up for a (free) Apple Developer account.
  2. Register your devices for testing. You can use the ssc list-devices command to get your Device ID (UDID). The device should be connected to your Mac by wire.
  3. Create a wildcard App ID for the application you are developing.
  4. Write down your Team ID. It's in the top right corner of the website. You'll need this later.

MacOS

  • Xcode Command Line Tools. If you don't have them already, and you don't have Xcode, you can run the command xcode-select --install.

iOS

Code Signing Certificates

  • Open the Keychain Access application on your Mac (it's in Applications/Utilities).
  • In Keychain Access, choose Keychain Access -> Certificate Assistant -> Request a Certificate From a Certificate Authority...
  • Type your email in the User Email Address field. The other form fields are optional.
  • Choose Request is Saved to Disk and save your certificate request.

MacOS

Signing software on MacOS is optional, but it's best practice. Not signing software is like using http instead of https.

  • Create a new Developer ID Application certificate on the Apple Developers website.
  • Choose a certificate request you created 2 steps earlier.
  • Download your certificate and double-click to add it to your Keychain.

iOS

To run software on iOS, it must be signed by Xcode using the certificate data contained in a "Provisioning Profile". This is a file generated by Apple and it links app identity, certificates (used for code signing), app permissions, and physical devices.

  • Create a new iOS Distribution (App Store and Ad Hoc) certificate on the Apple Developers website.
  • Choose a certificate request you created 2 steps earlier.
  • Download your certificate and double-click to add it to your Keychain.

When you run ssc build --target=ios . on your project for the first time, you may see the following because you don't have a provisioning profile:

ssc build --target=ios .
• provisioning profile not found: /Users/chicoxyzzy/dev/socketsupply/birp/./distribution.mobileprovision. Please specify a valid provisioning profile in the ios_provisioning_profile field in your `ssc.config`
  • Create a new Ad Hoc profile. Use the App ID you created with the wildcard.
  • Pick the certificate that you added to your Keychain two steps earlier.
  • Add the devices that the profile will use.
  • Add a name for your new distribution profile (we recommend naming it "distribution").
  • Download the profile and double-click it. This action will open Xcode. You can close it after it's completely loaded.
  • Place your profile in your project directory (the same directory as ssc.config). Profiles are secret; add your profile to .gitignore.

Configuration

MacOS

You will want to ensure the following fields are filled out in your ssc.config file. They will look something like this...

mac_team_id: Z3M838H537
mac_sign: Developer ID Application: Operator Tools Inc. (Z3M838H537)

iOS

  1. Set the ios_distribution_method value in ssc.config to ad-hoc
  2. Set the ios_codesign_identity value in ssc.config to the certificate name as it's displayed in the Keychain, or copy it from the output of security find-identity -v -p codesigning
  3. Set the ios_provisioning_profile value in ssc.config to the filename of your provisioning profile (e.g., "distribution.mobileprovision")
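
Taken together, the iOS entries in ssc.config will end up looking something like the sketch below. The values are placeholders: the identity string follows the typical Keychain naming for an iOS Distribution certificate and reuses the example company name and Team ID from the MacOS section above, so substitute your own.

ios_distribution_method: ad-hoc
ios_codesign_identity: iPhone Distribution: Operator Tools Inc. (Z3M838H537)
ios_provisioning_profile: distribution.mobileprovision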

Development

Create a simulator VM and launch the app in it

ssc build --target=iossimulator -r .

Distribution And Deployment

ssc build --target=ios -c -p -xd .

To your device

Install Apple Configurator, open it, and install Automation Tools from the menu.

Connect your device and run ssc install-app <path> where the path is the root directory of your application (the one where ssc.config is located).

An alternative way to install your app is to open the Apple Configurator app and drag the inner /dist/build/[your app name].ipa/[your app name].ipa file onto your phone.

To the Apple App Store

xcrun altool --validate-app \
  -f file \
  -t platform \
  -u username [-p password] \
  [--output-format xml]
xcrun altool --upload-app \
  -f file \
  -t platform \
  -u username [-p password] \
  [--output-format xml]

Debugging

Check the troubleshooting guide first. You can also run lldb and attach to a process, for example...

process attach --name TestExample-dev

Logging

To see logs on either platform, open Console.app (installed on MacOS by default) and in the right-side panel pick the device or computer name.

Working with the file system on iOS

iOS Application Sandboxing has a set of rules that limits access to the file system. Apps can only access files in their own sandboxed home directory.

Documents: The app’s sandboxed documents directory. The contents of this directory are backed up by iTunes and may be made accessible to the user via iTunes when UIFileSharingEnabled is set to true in the application's Info.plist.
Library: The app’s sandboxed library directory. The contents of this directory are synchronized via iTunes (except the Library/Caches subdirectory, see below), but never exposed to the user.
Library/Caches: The app’s sandboxed caches directory. The contents of this directory are not synchronized via iTunes and may be deleted by the system at any time. It's a good place to store data that provides a good offline-first experience for the user.
Library/Preferences: The app’s sandboxed preferences directory. The contents of this directory are synchronized via iTunes. It is intended for use by the Settings app; avoid creating your own files in this directory.
tmp: The app’s sandboxed temporary directory. The contents of this directory are not synchronized via iTunes and may be deleted by the system at any time. However, it's recommended that you manually delete data that is no longer necessary to minimize the space your app takes up on the file system. Use this directory to store data that is only useful while the app is running.
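
As a rough sketch, here is how an app might write throwaway cache data from JavaScript. This assumes the socket:fs/promises module shown earlier mirrors Node's fs/promises API and that relative paths resolve inside the app's sandboxed home directory; if they don't in your setup, build an absolute path to the directories above instead. The my-app directory and feed.json file are hypothetical.

import fs from 'socket:fs/promises'

// Library/Caches may be purged by the system at any time,
// so only put data here that the app can rebuild.
await fs.mkdir('Library/Caches/my-app', { recursive: true })
await fs.writeFile('Library/Caches/my-app/feed.json', JSON.stringify({ items: [] }))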

Desktop Guide

Getting Started

Open a terminal and navigate to where you keep your code. Create a directory and initialize it.

ssc init
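
For example (my-app is just a placeholder name):

mkdir my-app
cd my-app
ssc init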

Mobile Guide

Requirements for Android

Requirements for iPhone

Develop & Debug Cycle

You'll want to write code, see it, change it, and repeat this cycle. The typical approach is to create a watch script that rebuilds your files when there are changes. If you provide a port, the ssc command will try to load http://localhost on that port.

ssc build -r --port=8000

You'll need to tell your build script the output location. The ssc command can tell you the platform-specific build destination. For example:

./myscript `ssc list-build-target .`
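
For instance, a minimal build script might just copy your source files into that destination. The sketch below assumes a Node.js script (like the build.js generated by create-socket-app) that receives the platform-specific output directory as its first argument; adapt it to whatever bundler you actually use.

// build.js (illustrative sketch)
import { cpSync } from 'node:fs'

const target = process.argv[2] // e.g. the value printed by `ssc list-build-target .`

if (!target) {
  console.error('usage: node build.js <build-target-directory>')
  process.exit(1)
}

// Copy static assets into the platform-specific build destination.
cpSync('src', target, { recursive: true })
console.log(`copied src -> ${target}`)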

Running the Mobile Simulator

After you get your UI looking how you want, the next step is to try it out in the simulator. At this point, we can use either the -ios or -android flag along with the -simulator flag. This will create a platform-specific bundle, create and boot a simulator VM, and then run your app in the simulator if the -r flag is provided.

ssc build --target=iossimulator -r

Debugging in the Mobile Simulator

You can use Safari to attach the Web Inspector to the Simulator. In the Safari menu, navigate to Develop -> Simulator -> index.html. This will be the exact same inspector you get while developing desktop apps.

Next Steps

The JavaScript APIs are the same on iOS and Android, check out the API docs.

Peer To Peer

Why use P2P?

There are three areas where P2P is superior to client-server.

Complexity

One server to many users is a natural bottleneck, and scaling servers up quickly becomes a complex distributed system of shared state. A P2P network is many users to many users, and although it is also an eventually consistent, distributed system of shared state, the total complexity of all the components needed to create a P2P network is finite, transparent, and can be formally verified.

Security & Sovereignty

Clouds (even Gov. Cloud) are closed systems, owned by third parties. There is always a non-zero chance of a greater number of incidents than if the network, data, and software were entirely self-contained, without a man in the middle.

Cost

Servers, and cloud infrastructure in general, become more expensive as demand increases. "Free tiers" may cover trivial workloads, but beyond them the costs can be prohibitive for many kinds of businesses.

Use Cases

Instead of implementing server infrastructure, a front-end developer can connect users directly or indirectly using simple JavaScript APIs.

A chat app like Telegram, a social media app like Twitter, or a collaborative content creation app like Figma are all things that could be created on top of Socket Runtime without needing server infrastructure.

How does it Work?

The protocol Socket Runtime uses is called "Stream Relay". Its purpose is to deliver network packets. That necessitates connecting peers, so it implements comprehensive NAT traversal. In order to achieve the highest possible delivery rate, it implements a strategy for (long- and short-lived) partition tolerance.

It sits directly on top of UDP and is suitable for building applications and protocols. Use cases include chat, social networks, mail, photo and file sharing, and distributed computing.

The dependency for this API will be published on 2023.03.31.

import { Peer } from 'socket:peer'

const peer = new Peer({ publicKey, privateKey })

peer.onPacket = (packet, port, address) => {
  console.log(packet)
}

peer.init()

peer.publish({
  clusterId: 'greetings',
  message: 'hello, world'
})

TODO link to the API docs and explain this code more.

NAT Traversal

Problem Description

With a client-server architecture, any client can directly request a response from a server. However, in a peer-to-peer architecture, a client cannot simply request a response from any other client.

P2P software needs to listen for new messages from unknown peers. However, most routers will discard unsolicited packets. It is possible to work around this problem.

P2P connectivity requires coordination between three or more peers. Consider the following analogy.

Alice wants to write a letter to Bob. But Bob moves frequently, and she has no other way to contact him. Before Alice can send Bob the letter, she needs to learn his address. The solution is for her to ask some friends who might have talked to Bob recently.

In this analogy, Alice's letter is really a packet of data. Bob's address is his external IP address and port. And their friends are a serializable list of recently known IP addresses and ports. You can read more about the technical details in the STATE_0 section of the spec.
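
To make the analogy concrete, a single entry in that "friends" list is just a small, serializable record of a recently known peer. The field names below are hypothetical; the exact format is defined in the spec.

// One recently-known peer, as Alice's app might remember it.
const knownPeer = {
  peerId: 'bob',            // an identifier for the peer
  address: '203.0.113.7',   // Bob's last known external IP address
  port: 40510,              // the external port observed for Bob
  lastSeen: Date.now()      // when this peer was last heard from
}

// A serializable list of such records can be exchanged with friends
// so that anyone looking for Bob can try his last known address first.
const friends = [knownPeer]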

Small Networks

Problem Description

Imagine a scenario where Alice and Bob are sending each other messages. Alice goes offline. Bob continues to write messages to Alice. And because the app Bob is using has a good "local-first" user experience, it appears to Bob as if everything is fine and that Alice should eventually receive all his messages, so he also goes offline. When Alice comes back online, she doesn’t see Bob’s most recent messages because he’s not online to share them.

Problem Summary & Solution Deficits

This is a very common problem with P2P; it’s sometimes called the Small Network Problem. How well data can survive in a network with this problem is referred to as "partition tolerance". The problem is often solved by using STUN and TURN (relay servers), but these add additional infrastructure that comes at a cost (in time, money, and expertise).

Solution

The way the Stream Relay Protocol solves this problem is with a shared, global packet-caching strategy. Every peer in the entire network allocates a small budget (16 MB by default) for caching packets from any other peer in the network.

A peer will cache random packets, with a preference for packets that have a topic ID the peer subscribes to. A packet starts out with a postage value of 16 (a peer will never accept a packet with a postage value greater than 16). When a packet nears its max age, it is re-broadcast to 3 more random peers in the network, each taxing its postage by a value of 1 when received, unless its postage value is 0, in which case it is no longer re-broadcast and is purged from the peer’s cache.

When a message is published, it is also re-broadcast by at least 3 random peers, with a preference for the intended recipient and peers that subscribe to the same topic. The result is a high r-value (or R0, also known as the basic reproduction ratio in epidemiology).
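
The following sketch is illustrative only (it is not the actual implementation, and the field names, cache, and helpers are hypothetical); it roughly models the caching and re-broadcast rules described above.

const MAX_POSTAGE = 16

// Called when a peer receives a packet being relayed through the network.
function onRelayedPacket (packet, { cache, subscriptions, randomPeers }) {
  // A peer never accepts a packet with a postage value greater than 16.
  if (packet.postage > MAX_POSTAGE) return

  // Each hop taxes the postage by 1.
  packet.postage -= 1

  // Once postage is exhausted, the packet is purged and never re-broadcast.
  if (packet.postage <= 0) {
    cache.delete(packet.packetId)
    return
  }

  // Cache random packets, preferring topics this peer subscribes to.
  if (subscriptions.has(packet.topicId) || Math.random() < 0.25) {
    cache.set(packet.packetId, packet)
  }

  // As the packet nears its max age, relay it to 3 more random peers.
  if (packet.isNearMaxAge) {
    for (const peer of randomPeers(3)) peer.send(packet)
  }
}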

Solution Summary & Solution Gains

In simulations of a network of 128 peers, where the join and drop rate is >95%, +/-30% of NATs are hard, and only +/-50% of the peers subscribe to the topic ID, an unsolicited packet can replicate to 100% of the subscribers in an average of 2.5 seconds, degrading to only 50% after 1000% churn (unrealistically hostile network conditions) over a >1 minute period.

This strategy also improves as the number of participants in the network increases, since the size of the cache and the time packets need to live in it are reduced. If we revisit our original problem with this strategy, we can demonstrate a high degree of partition tolerance. It can be said that the peers in a network act as relays (hence the name of the protocol).

Bob continues to write (optionally encrypted) messages to Alice after she goes offline, and his packets are published to the network. A network of only 1024 peers (split across multiple apps) will provide Bob’s packets a 100% survival rate over a 24-hour period, without burdening any particular peer with storage or compute requirements. This allows Alice to return to the network and see Bob’s most recent messages without the need for additional infrastructure.

Cache Negotiation

TODO

Connectionless

TCP is often thought of as an ideal choice for packet delivery since it's considered "reliable". When TCP loses a packet, subsequent packets are withheld until the lost packet is retransmitted; this can mean a delay of up to 1s (as per RFC 6298 section 2.4). If the packet can't be retransmitted, an exponential backoff could add another 600ms of delay before retransmission.

In fact, head-of-line blocking is a problem with any ordered stream, whether TCP or UDP with additional higher-level protocol code that enforces ordering.

TCP introduces other unwanted complexity that makes it less ideal for P2P.

UDP is only considered "unreliable" in the sense that packets are not guaranteed to be delivered. However, UDP is ideal for P2P networks because it’s message-oriented and connectionless (ideal for NAT traversal). Also, because of its message-oriented nature, it’s lightweight in terms of resource allocation. It's the responsibility of a higher-level protocol to implement a strategy for ensuring UDP packets are delivered.

The Stream Relay Protocol eliminates head-of-line blocking entirely by reframing packets as content-addressable, doubly-linked lists, allowing packets to be delivered out of order and become eventually consistent. Causal ordering is made possible by traversing the previous ID or next ID to determine if there were packets that came before or after one that is known.
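
As an illustration of that idea (with hypothetical field names; the real framing is defined in the spec), packets carrying previous/next links can be re-ordered after out-of-order delivery by walking those links:

// Re-order packets that arrived out of order by following their links.
function orderPackets (packets) {
  const byId = new Map(packets.map(p => [p.packetId, p]))

  // A head is a packet whose predecessor has not been received.
  let current = packets.find(p => !byId.has(p.previousId))

  const ordered = []
  while (current) {
    ordered.push(current)
    // A gap here means a packet is missing; the receiver MAY request it
    // from the peer, or query the network if the peer is unavailable.
    current = byId.get(current.nextId)
  }

  return ordered
}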

In the case where there is loss (or simply missing data), the receiver MAY decide to request the packet. If the peer is unavailable, it can query the network for the missing packet.

The trade-off is that more data is required to frame the packet. The typical MTU for a UDP packet is ~1500 bytes. The Stream Relay Protocol uses ~134 bytes for framing, leaving 1024 bytes for application or protocol data, which is more than enough.

Further Reading

See the specification and the source code for more details.

Troubleshooting

macOS

Crashes

To produce a meaningful backtrace that can help debug the crash, you'll need to resign the binary with the ability to attach the lldb debugger tool. You'll also want to enable core dumps in case the analysis isn't exhaustive enough.

sudo ulimit -c unlimited # enable core dumps (`ls -la /cores`)
/usr/libexec/PlistBuddy -c "Add :com.apple.security.get-task-allow bool true" tmp.entitlements
codesign -s - -f --entitlements tmp.entitlements ./path/to/your/binary
lldb ./path/to/your/binary # type `r`, then after the crash type `bt`

Clock Drift

If you're running from a VM inside MacOS you may experience clock drift and the signing tool will refuse to sign. You can set sntp to refresh more frequently with the following command...

sudo sntp -sS time.apple.com

macOS asks for a password multiple times on code signing

Open Keychain Access and find your developer certificate under the My Certificates section. Expand your certificate and double-click the private key. In the dialog, click the Access Control tab.

The codesign utility is located at /usr/bin/codesign. To add it to the allowed applications list, click the "+" button to open the File Dialog, then press ⌘ + Shift + G and enter /usr/bin. Select the codesign utility from Finder.

Build or compile failures

aclocal / automake: command not found

To build ssc for iOS you need automake and libtool installed.

brew install automake
brew install libtool

unable to build chain to self-signed root for signer (...)

You need the intermediate certificate that matches your code signing certificate. To find which "Worldwide Developer Relations" certificate matches yours, open your signing certificate in Keychain Access, open this page, and find the certificate that matches the details in the "Issuer" section of your certificate.

xcrun: error: SDK "iphoneos" cannot be located

You have to configure the Xcode command line tools. To do this, you can run the following command:

sudo xcode-select --switch /Applications/Xcode.app

fatal error: 'lib/uv/include/uv.h' file not found

Make sure your local ssc binary has been compiled with the ios parameter (./bin/install.sh ios); otherwise uv.h will not exist.

unable to find utility simctl

You need to have Xcode installed on your Mac.

You have not agreed to the Xcode license agreements, please run 'sudo xcodebuild -license' from within a Terminal window to review and agree to the Xcode license agreements.

You can run sudo xcodebuild -license to agree to the license.

Multiple Password Prompts

If macOS asks you for a password every time you run the command with the -c flag, follow these instructions.

Application crashes on start

If you use iTerm2, your app may crash with:

This app has crashed because it attempted to access privacy-sensitive data without a usage description. The app's Info.plist must contain an NSBluetoothAlwaysUsageDescription key with a string value explaining to the user how the app uses this data.

Command line apps inherit their permissions from iTerm, so you need to grant Bluetooth permission to iTerm in macOS system preferences. Go to Security & Privacy, open the Privacy tab, and select Bluetooth. Press the "+" button and add iTerm to the apps list.

Windows

Development Environment

clang++ version 14 is required for building.

You will need build tools

The WebView2LoaderStatic.lib file was sourced from this package.

Cannot Run Scripts

If you see an error saying the app cannot be loaded because running scripts is disabled on this system:

./bin/bootstrap.ps1 : File C:\Users\user\sources\socket\bin\bootstrap.ps1 cannot be loaded because running scripts is
disabled on this system. For more information, see about_Execution_Policies at
https:/go.microsoft.com/fwlink/?LinkID=135170.

Then you can follow the steps in https://superuser.com/a/106363:

  1. Start Windows PowerShell with the "Run as Administrator" option.
  2. set-executionpolicy remotesigned

MSVC

Setting up the MSVC build environment from Git Bash

You can leverage the MSVC build tools (clang++) and environment headers directly in Git Bash by loading them into your shell environment. This is possible by running the following command:

source bin/mscv-bash-env.sh

The bin/install.sh shell script should work for compiling the ssc tool. It is also recommended to initialize this environment when building applications with ssc from the CLI so the correct build tools are used and the proper header and library paths are available to the compiler.

Linux

Build failures

If you are getting a failure saying that the build tool can't locate your compiler, try making sure that the CXX environment variable is set to the location of your C++ compiler (which g++, or which c++).

The latest version of MacOS should have installed C++ for you, but on Linux you may need to update some packages. To ensure you have the latest clang compiler and libraries, you can try the following...

For Debian/Ubuntu, before you install the packages, you may want to add these software update repos to the software updater.

Note that clang version 14 is only available on Ubuntu 22.04. Use clang 13 for prior versions of Ubuntu.

Ubuntu

sudo apt install \
  build-essential \
  clang-14 \
  libc++-14-dev \
  libc++abi-14-dev \
  libwebkit2gtk-4.1-dev

Arch/Manjaro

Arch uses the latest versions, so just install base-devel:

sudo pacman -S base-devel

Multiple g++ versions

If you've tried running the above apt install command and you get an error related to Unable to locate package, you can also install multiple versions of g++ on your system.

sudo apt install software-properties-common
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt install gcc-9 g++-9 gcc-10 g++-10 gcc-11 g++-11 gcc-12 g++-12

Then you can set your C++ compiler to g++-12:

# Add this to bashrc
export CXX=g++-12

Can't find Webkit

If you run into an error about not finding webkit & gtk like this:

Package webkit2gtk-4.1 was not found in the pkg-config search path.
Perhaps you should add the directory containing `webkit2gtk-4.1.pc'
to the PKG_CONFIG_PATH environment variable
No package 'webkit2gtk-4.1' found
In the file included from /home/runner/.config/socket/src/main.cc:9:
/home/runner/.config/socket/src/linux.hh:4:10: fatal error: JavaScriptCore/JavaScript.h: No such file or directory
    4 | #include <JavaScriptCore/JavaScript.h>
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.

Then you will want to install those dependencies

sudo apt-get install libwebkit2gtk-4.1-dev

FAQ

What is Socket Runtime?

Socket Runtime is a free and open-source client-side runtime that helps web developers build apps for any OS, desktop and mobile. You can use plain old HTML, CSS, and JavaScript, as well as your favorite front-end libraries, for example React, Svelte, and Vue.

Socket ships binaries that start at around 1 MB on desktop and around 12 MB on mobile because it leverages the OS's native webview. This is in contrast with Electron, for example, which ships an entire copy of a browser (and Node.js) with each program.

How does it fit into the runtime ecosystem?

Browser Runtimes: Safari, FireFox, Chrome
Server Runtimes: Bun, Deno, Node.js
App Runtimes: Socket, Tauri, Electron

How is Socket different from other hybrid-native runtimes, such as Electron, Tauri, NativeScript, React Native, Ionic, etc?

  • Socket is for Web developers, there is no new language to learn.

  • Socket is the first and only cross-platform runtime built from the ground up for desktop and mobile.

  • We embrace web standards instead of inventing new paradigms.

  • P2P and local-first are first-class considerations. We provide JavaScript APIs for Bluetooth, UDP, and robust file system IO. These make it possible to create an entirely new class of apps that are autonomous from the cloud and allow users to communicate directly without any infrastructure requirements.

Why should I care about P2P?

P2P features allow a developer to create apps where users can communicate directly, without the Cloud. It doesn’t require any servers at all, and even works when users are offline. These features are optional, they are NOT turned on by default and won't interrupt or conflict with your existing architecture or services.

Do Socket apps have total access to my computer, like Electron or Tauri?

Both Electron and Tauri take the approach of putting all "business" logic into the "main" process. The reasons for this are (1) to avoid degrading the performance of the UI (front-end) process, and (2) as a potential security measure.

However, the risks of a "main" process are impossible to mitigate on desktop because there is no sandboxing model like there is on mobile, so advanced security procedures between the main and render processes aren't convincing. It's really up to the app creator to appeal to the user's sense of trust.

Socket Runtime takes a completely different approach. While we allow a "main" process, it's completely optional, and not considered best practice. If you are shipping highly sensitive IP, you may choose to put it there. If you have compute-intensive code, you can also put it there. But ideally, you put it into a worker thread.

Socket apps can be written entirely in JavaScript, CSS, and HTML. The UI process can be made secure via the CSP (a web standard for white-listing resource access).

Invocations of the filesystem, Bluetooth, network, etc. are all made over IPC calls that use a URI scheme (ipc://...). Because of this, they work perfectly with CSP (a well-established web standard).

Any curious user can run a command like strings foo.app | grep ipc:// on a Socket app bundle and examine the CSP of the index file.
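
As a hypothetical sketch (the exact directives your app needs will vary, and this is not an official recommendation), a CSP that allows the app's own assets plus the ipc: scheme might look like this in index.html:

<meta
  http-equiv="Content-Security-Policy"
  content="default-src 'self'; connect-src 'self' ipc:"
/>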

Can Socket Apps run and be compiled headlessly?

Yes. This makes it great for creating Web developer tooling since it has a native DOM and all the browser APIs built in.

How can I trust what Socket is doing with my applications?

Socket is open-source. We would love for you to read all our code and see how we're doing things! Feel free to contact us as well and we can walk you through it.

Is Socket a Service?

Socket is NOT a cloud service. We do not have a SaaS offering. And there is no part of this that is hosted in the cloud.

There is a complementary application performance management product (APM), Socket Operator, that can diagnose and remediate issues within the production apps you build. This is also not a service, it's software.

But you're also a business, so you have to have some private technologies that you charge for, to make money?

As stated above, Socket Supply Co. builds and maintains a free and open-source runtime that helps web developers build apps for any OS, desktop and mobile, as well as a P2P library that enables developers to create apps where users can communicate directly, without the cloud.

These will always be open-source and free to use by any developer, no matter what they use them for (commercial or personal). That will always be true.

Our Operator App has different tools which help in the entire lifecycle of building, deploying, and monitoring the Socket apps you build. Operator App has various pricing tiers which hackers, startups, and enterprises can benefit from.

We already have teams of engineers that build our web and other native-platform app experiences. Why would we benefit from Socket?

App builders can prioritize what they want to solve when working with Socket. There are many benefits to choose from for a wide variety of reasons.

Cost reduction — Smaller teams that don’t have native developers in place can get to their customers quicker by writing once and running anywhere. Cloud bills are the #1 cost for many organizations; building on Socket reduces that to $0, or by as much as you want to migrate off the cloud. We say crawl, walk, run.

Autonomy — Right now you’re entirely codependent on a 3rd party to run a mission-critical part of your business. The Cloud is a landlord-tenant relationship with costs that can prevent your business from becoming profitable. Socket helps you connect your users directly to each other, allowing you to rely less on the Cloud, and reclaim your sovereignty, and your profit margins.

Complexity — Companies whose applications are built across desktop and mobile would move from building and maintaining three or more code bases to one code base with Socket. This drastically reduces complexity within the organization and speeds up feature releases.

Builders of network-enabled productivity and collaboration tools will realize major benefits by building on Socket. Evan Wallace, co-founder of Figma, said it best: "these days it’s obvious that multiplayer is the way all productivity tools on the web should work, not just design."

Is Socket part of Web3?

If we define "Web3" to mean a decentralized web, then yes. We don’t really take a position on anything else. We provide a technical foundation that makes it possible for many Web3 ideals to come to fruition.

In its current state, Web3 is not decentralized. The ecosystem relies heavily on centralized cloud providers like AWS for infrastructure. This is an economic disadvantage and in most cases a barrier to entry. However, apps built with Socket’s P2P capabilities can be 100% decentralized, and absolutely no servers are required. They can be fully autonomous, aligning directly with the mission of the web3 community.

Does P2P (without servers) mean that it only works if peers are online?

No! Socket's P2P protocol is designed for building disruption tolerant networks. It achieves long-lived partition tolerance through bounded replication of packets (limiting the number of hops and TTL of each packet that is relayed between peers in the network). Socket's P2P protocol builds on a corpus of existing academia. Please see the docs for more in-depth details.

If I send information to my friend or coworker, will other connected peer devices see this message as they relay it?

Peers do relay packets for each other, to ensure that any peer can communicate with any other peer, even if they aren't directly connected or ever online at the same time.

However, all data packets (those used for user data, not network coordination) are encrypted, such that only the intended recipient of the packets can decrypt and access the information therein.

So your message will reside in parts (packet by packet) on many other users' devices, at various times, but only in parts and only encrypted, meaning those other devices cannot make any sense of that data.

This encryption/decryption security uses industry-standard (and audited!) public key cryptography, similar to, and at least as safe as, the HTTPS/TLS encryption that users across the web trust for communication with very sensitive sources, including banks, doctors, etc.

How do I know that a message I receive (and decrypt) was not tampered with or faked by someone other than who the message claims to be from?

At the network packet level, packets are encrypted using the public key of the intended recipient. Only the recipient (holding the paired private key) could possibly decrypt the packet, which would be necessary for tampering.

Any man-in-the-middle tampering with an encrypted packet would render the final decrypted value as garbage. The app would be able to immediately tell that the expected data was garbled and thus discard it.

Corrupted (or manipulated) packets, or even dropped/missing packets, can be automatically re-queried across the peer network, to reacquire the necessary packets. As such, the encryption used guarantees that information received is either complete and intact, before decryption, or entirely dropped.

As for determining the identity and authenticity of the sender, the network protocol does not employ the overhead of digital signatures, verification, or digital certificates.

Socket apps are allowed, and expected, to employ their own security layered on top of (tunneled through) the network encryption provided automatically. This may include additional encryption, digital signatures, digital certificates (identity verification), and more, according to the needs and capabilities of the app.

All of those app-specific techniques are still leveraged and negotiated across Socket's peer network.

Your device never holds plain-text (or plainly accessible) data on behalf of any other user. The packets your device relays on behalf of others were encrypted for those intended recipients, and your device could never possibly decrypt or make sense of any of that data.

You thus have perfect deniability as your protection from those potential risks and liabilities.

This is analogous to internet infrastructure like switches/routers, which are peppered by the billions around the web. None of these devices can decrypt the HTTPS traffic transiting through them, and thus none of those devices ever have any liability for the kinds of information buried inside the encrypted data as it flows through.

Socket isn't introducing anything more dangerous here than has already existed for the last 25+ years of the internet.

More importantly, the relay of packets through your device only happens in memory (never on disk), and only while you have a Socket powered app open for use. If you close the app, or power-off / restart your device, that cache is wiped completely; the in-memory cache only gets filled back up with more packets when you open a Socket powered app while online.

As the device user, it's always your choice and in your control.

Does relaying packets for other peers mean my device will be doing computation on behalf of others?

No!

The P2P relaying of packets is merely a pass-thru of (encrypted) data. Your device performs almost no computation on these packets, other than to check the plaintext headers to figure out whether and how to relay it along.

Aside from this very simple and fast processing of these packets, your device will never perform any computation on behalf of any other person.

The only exception would be computation you had directly and expressly consented to via an app that you chose to install and open/use, if that app was designed in such a way to share computation work with others.

For example, "SETI@home" type apps intentionally distribute computation (image processing, etc) among a vast array of devices that have idle/unused computing power being donated to a good cause. Another plausible example: some apps are currently exploring distributing machine-learning (ML/AI) computations among an array of peers.

If you installed such an app, and opened it, your device would subject itself to app-level computation on behalf of others. But you remain in control of all those decisions, including closing such apps, uninstalling them, etc. And if you didn't install and open such an app, none of that distributed computation would ever happen on your device, regardless of how others use the P2P network.

No unintended, background, or abusive computation on your device is ever possible by virtue of the Socket P2P protocol itself. Only apps themselves can coordinate such distributed computation activities, and only with express consent from users at installation.

Aside from CPU computation, doesn't allowing my device to participate in packet relay for many other peers subject my device to extra resource utilization (using up my memory, draining my battery more quickly, etc.)?

The only resource utilization that occurs is that which you consent to by opening and using Socket apps.

Socket limits the memory used for the packet relay cache, currently to 16MB (not GB!). This is an extremely small slice of typical device memory, even budget-level smartphones (which typically have at least 1-2 GB of memory).

As for the battery, Socket does not perform unnecessary background work, so any battery usage you experience should be directly proportional to the active use of a Socket powered app.

Relaying packets is a very simple and resource-light type of task. In our testing, we haven't seen any noticeable increase in resource load on devices as a result of running a Socket powered app, compared to any other consumer apps users typically use.

As a matter of fact, Socket powered apps tend to use and transmit far less data than other commercial/consumer apps, so users can generally expect no worse (and often much improved!) resource utilization than for non-Socket apps.

Does P2P packet relay mean that data transmission, such as me sending a text message or picture to a friend, will go much slower?

P2P packet relay, even across a broad network of many millions of devices, is surprisingly fast and efficient, compared to typical presumptions.

If the sender and receiver of a message are both online at the time of a message being sent and are at most a few hops away in terms of the packet relay protocol of Socket, this transmission should take no more than a few hundred milliseconds at most.

In fact, since this communication is much more direct than in typical infrastructure, where messages have to go all the way out to a cloud server, and then on to the recipient, it's quite likely that communications will be at least as fast, if not much faster, via P2P communications techniques (relay, etc) as described.

If the recipient of my message is not online when I send it, how long will the packets stay alive in the P2P network before being dropped, if the recipient has not yet come online and received the packets?

There's a lot of "it depends" in this answer (including the size of the message, how many packets it spans, and network activity/congestion). But in general, messages may be able to survive for as long as a couple of weeks, and almost never less than several days.

Apps are expected to be designed with the lack of delivery guarantees in P2P networks in mind. To help users compensate and adapt, these apps should provide appropriate user-experience affordances, including "resend", "read receipt", and other such capabilities.

I've heard that P2P is too hard and doesn't work because NAT traversal is hard.

This is a hard problem, and until now there hasn't been a comprehensive solution for web developers.

We are able to reliably connect all kinds of NATs. For hard-to-hard NATs, we rely on other features of our protocol.

NAT traversal and negotiation are handled automatically, so app developers do not need to worry about these messy details. That said, all our code is open-source, so we invite you to take a deeper look if you're curious about how we handle these complicated tasks on your app's behalf. Our work builds on a corpus of peer-reviewed academic work, primarily this paper.

Bad actors are certainly going to try to flood the network with junk, to deny/degrade service (DoS attacks), attack peers (DDoS attacks), etc. How can this P2P network possibly survive such abuse?

The P2P packet relay protocol includes a sophisticated set of balancing techniques, which act to ensure that no peer on the network places an outsized burden on other peers in the network.

Fluctuations and usage differences of course are a reality, but the protocol naturally resists the kinds of behaviors that bad actors rely on.

We've done a significant amount of modeling, simulation, and real-world field testing, and we're convinced that these types of attacks will ultimately prove impractical and will not affect the trajectory and growth of our P2P network.

Is this like BitTorrent, Tor, Napster, Gnutella, etc?

The web's roots are P2P, and yes there have been a number of widely known (and sometimes infamous!) attempts to bring the web back to its P2P identity over the years; some good, some not so good. Most of these are focused on file sharing. We see a broader opportunity with P2P which is focused on connectivity, reduced infrastructure cost, and reduced complexity in general.

We think the time has come for the web to return to the P2P model by default, to dismantle the wasteful and unnecessarily complicated (and expensive!) centralization trend that has given rise to the age of the "cloud". There are more than enough consumer devices, many of them highly connected, to accomplish a de-centralization.

While these changes have profound effects on improving how developers and businesses build and deliver experiences to consumers, it's the revolution of a user-centric web that most excites us.

Users don't need all of their data sent up to the cloud, nor do they want that. Users want privacy by default. Users don't need or want to be tracked with every single click or keystroke. Users don't want to wait, staring at spinners, while entire applications full of tens of megabytes of images, fonts, and JS code re-download every single page load. Users don't want or need walled-garden app stores to single-handedly decide what apps they're allowed to access, or how they're allowed to communicate and collaborate using those apps. Users don't want experiences that only work if they have a perfect internet connection, and die or are unavailable when wifi gets spotty.

All of these are hallmarks of the web as it is today, and all of these are tricks designed to work in favor of big centralized companies that slurp up all our data and then charge us rent to hold it. All of these are user-hostile behaviors that for the most part users can't opt out of, but overwhelmingly don't actually want.

Socket is a foundational building block that we believe can help usher in a new age of the web, one that puts users first. One that blurs the lines between websites and apps, and puts all those amazing experiences right on users' devices for them to use instantly, no matter where they are or what kind of internet connection they have (or not!). One that defaults to a local-first (or even local-only!) model that protects users' information by default.

Putting developers in control, and moreover putting users in control, isn't a fad or a phase. We think it's exactly where the web has to go to survive, and we believe it's where everyone that builds for the web will shift to eventually. Those are admittedly pretty big aspirations and goals, but they're far from unrealistic or naive.