Hi, Steemit! We're Textile. Here's a deeper look at the tech behind our Threads protocol.

<html>
<p><em>Written by </em><a href="https://medium.com/@carsonfarmer"><em>Carson Farmer</em></a><em> &amp; </em><a href="https://medium.com/@sanderpick"><em>Sander Pick</em></a></p>
<p><center><img src="https://cdn-images-1.medium.com/max/800/1*beyK889WqmNEatN55oBAnA.png" width="800" height="565"/><em>Download the app, take a picture, share!</em></center></p>
<p>Recently, we’ve started writing more about the technologies underlying Textile Photos that help keep your photos (and likes and comments, etc.) safe and secure on the decentralized web. In our previous post, we talked about <a href="https://medium.com/textileio/the-5-steps-to-end-to-end-encrypted-photo-storage-and-sharing-45ad4aad6b14">the encryption process behind Textile Photos</a>, with a focus on how Textile delivers end-to-end encrypted photo sharing. Today’s post is a follow-up (though it should also be sufficiently detailed to stand on its own), this time highlighting how Textile coordinates private photo sharing among groups of users, a feature we call <em>Threads</em>.</p>
<p><strong>Why we built it</strong><br>
We designed Threads to allow groups of users to share photos securely and privately, without any centralized, authoritative database. We also made sure it all works well offline, that it’s possible to recover lost data, and that it’s easy to add new members.</p>
<p><strong>What makes Threads different</strong><br>
Threads allow private groups to post photos and interact over a decentralized network, maintaining complete control over their own content. Textile operates in a completely zero-knowledge framework. Private by design.</p>
<p><strong>Why Threads are exciting</strong><br>
Because photos are just the first step. Today, Threads allow users to share a photo with other Thread members in a secure, decentralized way. Threads can facilitate secure sharing, coordination, and storage of <em>many</em> types of data over a decentralized network. Upgradable by design.</p>
<p>On the surface, you can think of each Thread like a decentralized database, shared between specific participants. We built Threads into the fabric of Textile (see what we did there 😉) because group members need a record of who shared what photo, and when. But, once we created Threads, we realized just how powerful a concept this was — for those familiar with mobile app development, think <a href="https://realm.io/">Realm</a> or <a href="https://firebase.google.com/">Firebase</a> but <em>without</em> the centralized server.</p>
<p>To really understand what Threads bring to the table, you need to understand how they work. So let’s dig a bit deeper into how Textile conceptualizes and implements Threads, and how that helps keep your photos (and likes and comments, etc.) safe and secure on the decentralized web. We’ll start by highlighting the specific requirements we had when developing Threads, and then break down each of these requirements into the specific solutions we came up with. Along the way, our CTO Sander Pick will highlight how those various solutions came about, and why we think our approach is in the best interest of our users.</p>
<h3>The experience</h3>
<p><a href="https://textile.photos/">Textile Photos</a> &nbsp;allows small, decentralized, private groups to share photos, send &nbsp;messages, and engage with each other. That’s the experience, so it has &nbsp;to ‘just work’.</p>
<p><center><img src="https://cdn-images-1.medium.com/max/800/1*xEE3Pp-LdU_r8qsPvMVS8w.gif" width="400" height="600"/></center></p>
<h3>Requirements</h3>
<p>To drive the Textile user experience, we identified five key features needed in the sharing protocols:</p>
<ol>
  <li><strong>A mechanism to share and receive state updates within a group of n users</strong><br>
To enable photo sharing (and other common interactions such as likes and comments) among a group of friends and/or colleagues, some concept of a <em>shared state</em> is required.</li>
  <li><strong>A way to ensure the shared state stays resilient to peers dropping out or latency issues</strong><br>
Since we’re operating in a mobile environment, we have to expect peers to continually drop ‘offline’ due to coverage issues, app backgrounding, battery optimizations, and a whole slew of other reasons for a mobile device to be cut off from a network.</li>
  <li><strong>A way to avoid state conflicts with other members of the group</strong><br>
On top of the requirements above, when peers do come back online, we don’t want state changes made by other members of the group while they were disconnected from the network to conflict with their own local changes.</li>
  <li><strong>A mechanism to recover the full state from the network as a whole</strong><br>
Another important consideration in the mobile world is that the number of users (out of n) that are online at any given time is generally unknown, and quite possibly zero. To reiterate, we want a decentralized shared state, but it has to work <em>even when you are the only member online</em>. This means we have to assume the full group state may not ever be directly accessible (i.e., downloadable) from a single group member. This is in contrast to something like <a href="https://en.wikipedia.org/wiki/Bitcoin">Bitcoin</a>, where new nodes are able to <a href="https://bitcoin.org/en/developer-guide#initial-block-download">download the full blockchain</a> from any connected peer.</li>
  <li><strong>A way to link updates via their content, rather than where they are stored</strong><br>
Since we are building on top of the IPFS network, and would like to eventually support a <a href="https://filecoin.io/">Filecoin</a>-based future in which users can select from a multitude of decentralized storage providers, Threads need to embrace content addressing, rather than location addressing. This makes it easy to grow and change the underlying network without affecting data access and sharing.</li>
</ol>
<p>With these requirements in mind, let’s break down our solutions into their individual components…</p>
<h3>Solutions</h3>
<h4>1. Handling Updates — use a peer-to-peer network with structured updates</h4>
<p>First things first: <em>how do we handle state updates between a set of distributed peers?</em> This is mostly about <a href="https://en.wikipedia.org/wiki/Peer-to-peer">peer-to-peer (p2p) networking</a>. And when it comes to communicating between heterogeneous network devices (computers, phones, IoT devices, etc.), we actually need many <a href="https://en.wikipedia.org/wiki/Lists_of_network_protocols">different types of network protocols</a>. That way, no matter what type of device we are talking about — be it a phone, desktop computer, browser, or Internet-enabled fridge — it is able to communicate with other devices located in the same room, or on the other side of the planet.</p>
<p>At Textile, we use the super amazing <a href="https://libp2p.io/">libp2p</a> library for our networking needs. Libp2p is a networking stack and library (you might have heard it called a protocol suite) modularized out of the <a href="https://ipfs.io/">IPFS project</a>, and bundled separately for other tools to use. Essentially, libp2p does all the heavy network lifting so that we can focus on our core task: exchanging updates between communicating peers.</p>
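<p>To give a feel for what this looks like in code, here is a minimal sketch of standing up a libp2p host in Go (the language textile-go is written in). The constructor options have changed across libp2p versions, so treat this as illustrative rather than a pinned, working configuration:</p>
<pre><code>// A minimal libp2p host in Go. Note: in some older libp2p versions
// New() took a context argument; check your version's docs.
package main

import (
	"fmt"

	libp2p "github.com/libp2p/go-libp2p"
)

func main() {
	// New assembles transports, security (e.g., secio in the era of
	// this post), and stream multiplexing with sensible defaults.
	h, err := libp2p.New()
	if err != nil {
		panic(err)
	}
	defer h.Close()

	fmt.Println("peer ID:", h.ID())
	fmt.Println("listening on:", h.Addrs())
}
</code></pre>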
<p><center><img src="https://cdn-images-1.medium.com/max/800/1*0iCOZW6HrpdQ0izG5O9ciw.png" width="800" height="316"/><em>Decentralized, peer-to-peer networks are radically different types of communication networks.</em></center></p>
<blockquote>Libp2p was a pretty natural choice for us. The stack includes all the crypto and networking protocols we need to deliver messages to group members, and the libp2p developer community is super responsive and excited about the power of p2p interactions. Easy choice.</blockquote>
<p>The other really nice thing about using the libp2p library is that it comes packed with many useful cryptography tools and functions, keeping communications secure. For instance, all p2p communications over the Textile network use the <a href="https://github.com/libp2p/go-libp2p-secio">secio</a> <a href="https://github.com/libp2p/go-stream-security">stream security transport</a>. This way, all connections use secure sessions provided by libp2p/secio to encrypt all traffic, whereby a TLS-like handshake is used to set up the initial communication channel.</p>
<p>Like many IPFS-based projects, Textile uses <a href="https://developers.google.com/protocol-buffers/">Protocol Buffers</a> for over-the-wire communication, and advanced cryptographic algorithms to secure those messages. Essentially, each update to the shared group state is just an encrypted Protobuf message with two parts: a header with author and date info, and a body with the type-specific data. These pieces are sent in their own inner ‘envelope’ which contains a link to the encrypted message and the Thread ID. This inner envelope is then signed by the sender and placed into the wire ‘envelope’ along with its signature. You can read more about some of the cryptographic tools Textile uses in <a href="https://medium.com/textileio/the-5-steps-to-end-to-end-encrypted-photo-storage-and-sharing-45ad4aad6b14">this previous article</a>. You can also check out how we structure our <a href="https://github.com/textileio/textile-go/blob/master/pb/protos/thread.proto">Protobuf messages</a>, learn a bit more about <a href="https://github.com/auditdrivencrypto/secure-channel/blob/master/prior-art.md#ipfss-secure-channel">how secio works</a>, plus check out some <a href="https://github.com/textileio/textile-go/commit/05a269cd5dd72da224c4d2c472abad00a078d4ca">recent updates</a> to message encryption while you’re at it.</p>
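<p>To make that layering concrete, here’s a rough Go sketch of the update-plus-envelope structure described above. The type and field names are hypothetical, not the actual definitions from textile-go’s thread.proto:</p>
<pre><code>// Illustrative Go structs mirroring the layered message format
// described above. Names are hypothetical; see thread.proto in
// textileio/textile-go for the real Protobuf definitions.
package thread

import "time"

// Header carries author and date info for an update.
type Header struct {
	Author string // peer ID of the sender
	Date   time.Time
}

// Update is the two-part message that gets encrypted and stored.
type Update struct {
	Header Header
	Body   []byte // type-specific payload (e.g., a photo or comment)
}

// InnerEnvelope links to the encrypted Update rather than
// embedding it, and names the Thread it belongs to.
type InnerEnvelope struct {
	MessageCID string // content address of the encrypted Update
	ThreadID   string
}

// WireEnvelope is what actually travels between peers: the signed
// inner envelope, so recipients can verify authorship up front.
type WireEnvelope struct {
	Inner     InnerEnvelope
	Signature []byte // sender's signature over the inner envelope
}
</code></pre>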
<h4>2. Network Resilience — support offline messaging so peers can come and go</h4>
<p>If you are at all familiar with libp2p, then you might be thinking <em>“ah, libp2p has a pubsub layer that would be perfect for exchanging updates with a group of connected peers”</em>. And while you’d certainly be right, there are a few key limitations that make using pubsub for something like Textile Photos pretty cumbersome. Chief among them: while pubsub is super nice for things like chat rooms or distributed services, it is a ‘fire-and-forget’ messaging protocol, meaning that once a peer publishes a message, it is up to its peers to ensure they are listening for the right message at the right time. To circumvent this, some pubsub systems introduce message echoing, to ensure a message stays in the system long enough to be picked up by the peers who might need it. However, this can lead to really noisy network traffic, and is really just a band-aid over a larger issue.</p>
<blockquote>Our initial POC involved pubsub and always-online room echoers… not scalable or particularly decentralized. A real solution to distributing state has to involve direct messaging with an offline mechanism.</blockquote>
<p>So this starts to get at our second requirement, that <em>the shared state stays resilient to peers dropping out</em>. We need to assume peers might not be around to receive important messages in ‘real-time’, which is a common problem with p2p systems. Right now, Textile addresses this problem by enabling what you might call <em>offline messaging</em>. Since we’re <a href="https://medium.com/textileio/the-5-steps-to-end-to-end-encrypted-photo-storage-and-sharing-45ad4aad6b14#ea20">already using IPFS for data storage and communication</a>, we wanted to take advantage of some of the core technologies driving IPFS. In particular, we (currently) use a special fork of the <a href="https://en.wikipedia.org/wiki/Kademlia">Kademlia</a>-based <a href="https://en.wikipedia.org/wiki/Distributed_hash_table">distributed hash table</a> (DHT) <a href="https://github.com/libp2p/go-libp2p-kad-dht">used by IPFS</a> that allows us to post messages for a peer directly in the DHT. For those unfamiliar with DHTs, they are a <a href="https://en.wikipedia.org/wiki/Hash_table">hash table</a> where the data is spread across a network of nodes or peers, all coordinated to enable efficient access and lookup between nodes in a decentralized way. You can read more about this kind of stuff in our previous article about <a href="https://medium.com/textileio/swapping-bits-and-distributing-hashes-on-the-decentralized-web-5da98a3507">how IPFS peers find, request, and retrieve content (and each other) on the decentralized web</a>. So, when a peer we want to communicate with is offline, rather than blindly sending them a message that will never be received, we post a message to Textile’s DHT, and they can then retrieve that message the next time they come online. Conceptually simple, and it works pretty well in practice.</p>
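<p>Conceptually, the scheme reduces to a put/get against the DHT, keyed by the recipient. The sketch below illustrates the idea in Go under a generic key-value interface; the names (and the key scheme) are our own hypothetical stand-ins, not the API of Textile’s actual DHT fork:</p>
<pre><code>// A conceptual sketch of DHT-backed offline messaging, assuming a
// generic key-value view of the DHT. The real implementation uses a
// fork of go-libp2p-kad-dht; these names are hypothetical.
package offline

import "context"

// KeyValueDHT is a stand-in for the DHT's put/get interface.
type KeyValueDHT interface {
	PutValue(ctx context.Context, key string, value []byte) error
	GetValues(ctx context.Context, key string) ([][]byte, error)
}

// Post leaves an encrypted envelope in the DHT under a key derived
// from the recipient's peer ID, so it can be found later.
func Post(ctx context.Context, dht KeyValueDHT, recipient string, envelope []byte) error {
	return dht.PutValue(ctx, "/textile/offline/"+recipient, envelope)
}

// FetchInbox is called when a peer comes back online: it looks up
// its own key and retrieves whatever was left while it was away.
func FetchInbox(ctx context.Context, dht KeyValueDHT, self string) ([][]byte, error) {
	return dht.GetValues(ctx, "/textile/offline/"+self)
}
</code></pre>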
<p><center><img src="https://cdn-images-1.medium.com/max/800/1*JsXZAveyMui2izg_-FMh0g.png" width="800" height="485"/><em>p2p &nbsp;network with custom Textile DHT overlay. Peers post (key, value) &nbsp;messages (value) with a key specific to their intended recipient, and &nbsp;this key is broadcast and available to entire network; though only the &nbsp;intended recipient is able to decrypt the actual message content. Based &nbsp;on Figure 1–2 from </em><a href="https://www.researchgate.net/figure/2-Overlay-and-underlay-view-of-Distributed-Hash-Tables-11_fig2_304495476"><em>this&nbsp;thesis</em></a><em>.</em></center></p>
<p>There are still some issues with our current approach, including that it is difficult/impossible to remove messages from the DHT manually. Indeed, it can start to get a bit messy when left-over offline messages have to be retrieved each time a peer comes back online… imagine a peer that goes in and out of service frequently; this could lead to a lot of network traffic and wasted CPU cycles. So, we’ve <a href="https://github.com/libp2p/notes/issues/2#issuecomment-433729343">implemented an alternative</a> to this DHT-based offline messaging system that does not suffer from these limitations (and also allows us to participate in the public IPFS network), while still remaining decentralized and scalable in the long-term. This new approach should be released soon, after more testing and evaluation. You can follow along with this progress as part of the <a href="https://github.com/textileio/textile-go/projects/2">move towards a Cafe-based setup</a> (see also <strong>What’s Next</strong>).</p>
<h4>3 &amp; 4. Avoiding Conflicts &amp; State Recovery — use a CRDT to keep an immutable history across peers</h4>
<p>Ok, so our next requirement and its associated solution have received a <a href="https://github.com/ipfs/research-CRDT/">great deal of research and development attention over the years</a>. The question of “<em>how to avoid state conflicts with other members of a group?</em>” comes up when working collaboratively on documents, updating shared databases, etc. For the purposes of updating a shared Thread of photos, it turns out that an <a href="https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type">operation-based CRDT</a> that supports append-only operations is pretty much all you need to get going. You can think of Textile’s CRDT (which shares some ideas with <a href="https://github.com/orbitdb/ipfs-log">ipfs-log</a>) setup as an immutable, append-only tree that can be used to model a mutable, shared state between peers. Every entry in the tree is saved on IPFS, and each one points to the hash of the previous entry (or entries), forming a graph. These trees can be <a href="https://www.atlassian.com/git/tutorials/using-branches/git-merge">3-way and fast-forward merged</a>.</p>
<p>Speaking of forks and joins, for those familiar with git and other similar systems, you might be thinking this sounds a lot like a <a href="https://blog.thoughtram.io/git/2014/11/18/the-anatomy-of-a-git-commit.html#meet-the-tree-object">git hash tree</a>, <a href="https://twitter.com/Textile01/status/1004436869734543360">Merkle DAG</a>, or even a <a href="https://en.wikipedia.org/wiki/Blockchain">blockchain</a>. And you’d be right! The concepts are very similar, and this buys us some really nice properties for building and maintaining a shared state. By modeling our shared Thread state in this way, we benefit from tried and tested methods for allowing a peer to incorporate other peers’ updates into their state while maintaining history (via fast-forwards and three-way merging, for example).</p>
<blockquote>At the end of the day, a Thread is just a git-like hash tree of updates with a deterministic merge policy. Simple.</blockquote>
<p>So what does this look like in practice? Currently — because things might change as we make improvements to the underlying implementation — each Thread in Textile Photos is essentially a chain of updates, where each update represents some specific action or event. For instance, when you create a new Thread, under-the-hood you are actually creating a <code>JOIN</code> update on a new Thread chain. Similarly, when you update the Thread via a new photo (<code>DATA</code> update), comment, or like (<code>ANNOTATION</code> update), you’re actually updating that Thread chain. After each modification, the <code>HEAD</code> of the Thread will point to the latest update.</p>
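<p>In Go-flavored pseudocode, a Thread chain might be modeled something like the sketch below. This is a simplification we’re using for illustration; the real structures live in textile-go:</p>
<pre><code>// A simplified model of a Thread as an append-only chain of typed
// update blocks. Types and fields here are illustrative only.
package thread

type UpdateType int

const (
	JOIN       UpdateType = iota // a peer joins the Thread
	DATA                         // a new photo
	ANNOTATION                   // a comment or like
	// ... plus INVITE, MERGE, etc. (discussed below)
)

type Block struct {
	Type    UpdateType
	Parents []string // hash(es) of the previous update block(s)
	Author  string   // peer ID of the update's creator
	Payload string   // content address of the encrypted update body
}

// Thread tracks the hash of its most recent block.
type Thread struct {
	ID   string
	Head string // hash of the latest Block; moves with each update
}
</code></pre>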
<p>Building on top of these ideas, we also have concepts such as an <code>INVITE</code>, which points a new peer to a given point on the Thread chain, or a <code>MERGE</code>, which happens when the current <code>HEAD</code> is not contained in an incoming update’s parent list for some reason (maybe the peer doesn’t know about it because they were offline). If two peers are merging the <em>same sub-trees</em>, all they need to do to ensure the update resolves to the same hash is a) include the same date and b) exclude author info. To get the same date, they both follow a rule: choose the latest of the parents for the date (in practice they add a little bit extra on to keep it ahead of both parents).</p>
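<p>That date rule is easy to express in code. Here’s a hedged sketch (the exact increment textile-go adds is an implementation detail; the millisecond below is just an assumption):</p>
<pre><code>// mergeDate implements the deterministic MERGE rule sketched above:
// both peers derive the same merge block independently by excluding
// author info and computing the date from the parents alone.
package thread

import "time"

func mergeDate(parentA, parentB time.Time) time.Time {
	latest := parentA
	if parentB.After(latest) {
		latest = parentB
	}
	// Nudge the date forward so the merge sorts after both parents.
	// (The real increment used by textile-go may differ.)
	return latest.Add(time.Millisecond)
}
</code></pre>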
<p>To give you a better idea of what exactly we’re talking about, consider the following set of operations: <code>User A</code> creates a new Thread, and adds a Photo. They then externally invite <code>User B</code> (the invite is sent via some other <em>secure</em> communication channel), who eventually joins the Thread. But before <code>User B</code> is able to join the Thread, <code>User A</code> adds another Photo, moving the Thread’s <code>HEAD</code> forward. By the time <code>User B</code> joins the Thread, they’d end up with a Thread sequence that looks something like this:</p>
<p><center><img src="https://cdn-images-1.medium.com/max/800/1*h9eNwOauuT7mMeS4o-yGbw.png" width="800" height="478"/><em>Thread &nbsp;join example. Solid arrows point towards the ‘parent’ of a given &nbsp;update, over-the-wire communications are indicated with a 📶-style &nbsp;arrow, and messages that are rebroadcast (e.g., via the welcome message) &nbsp;are indicated with a dashed arrow. Similarly, merges point to both &nbsp;their parent&nbsp;updates.</em></center></p>
<p>Here, we see the merge happening at the end of the sequence because the bottom peer is joining via an external invite that is no longer <code>HEAD</code>, forcing them to merge the most recent <code>DATA</code> update with their own <code>JOIN</code> update. But since merge results are deterministic (given the same parents), both peers create the <code>MERGE</code> update locally, and do not broadcast it, to avoid trading merges back and forth.</p>
<p>A more complete sequence is given in the following figure. Suppose <code>User A</code> goes ‘offline’ (e.g., their phone goes to sleep, they shut down the app, they lose their data connection, etc.), and in the meantime, both <code>User A</code> and <code>User B</code> update the Thread, with <code>User A</code> adding an <code>ANNOTATION</code> update, and <code>User B</code> adding a new Photo (<code>DATA</code> update). Now, when <code>User A</code> comes back online, there is a conflict, and both Users create a <code>MERGE</code> update to remedy this. A <code>MERGE</code> update has two parents, in this case, the <code>DATA</code> and <code>ANNOTATION</code> updates from the different users. As always, the <code>HEAD</code> continues to point to the latest update (which in the example below eventually becomes an <code>ANNOTATION</code> from <code>User B</code>). Once both peers are online again, the more straightforward update-and-transmit mode of operation can continue.</p>
<p><center><img src="https://cdn-images-1.medium.com/max/800/1*3EuEtFHqUtALTczIeZ8_eg.png" width="800" height="473"/><em>More &nbsp;complex Thread interaction where one or more peers are temporarily &nbsp;offline. Note that an external invite is the same as a normal invite, &nbsp;but the invite details are encrypted with a single use key, which is &nbsp;sharable with the invite update location.</em></center></p>
<p>The same properties that make hash trees or blockchains useful for developing a shared, consistent (consensus-driven) state also make it possible to address our fourth requirement: <em>the ability to recover the full state from the network as a whole</em>. Because each Thread update references its parent(s), given a single point on the Thread chain, we can trace back all the way to the beginning of the Thread. For example, at any point along the sequence in the above figures, a peer can trace back the history of the Thread, as indicated by the solid arrows. This works particularly nicely when a peer <code>JOIN</code>s a Thread, even at a point prior to the current <code>HEAD</code>. They can simply <code>JOIN</code>, and any existing Thread member can send them the latest <code>HEAD</code> (even via offline messages if needed). From here, they can explore the entire history of the Thread with ease. This is all really similar to git, in which one only needs to know about a single commit to be able to trace back the entire history of a code project; it’s also essentially how blockchains work.</p>
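<p>Continuing the hypothetical <code>thread</code> package from the sketches above, walking a Thread’s history from <code>HEAD</code> back to its first block is just a graph traversal over parent links, much like <code>git log</code>:</p>
<pre><code>// History returns every block reachable from head by following
// parent links. BlockStore is a hypothetical lookup from block
// hash to Block (backed by IPFS in practice).
package thread

type BlockStore interface {
	Get(hash string) (Block, error)
}

func History(store BlockStore, head string) ([]Block, error) {
	seen := make(map[string]bool)
	var out []Block
	stack := []string{head}
	for len(stack) > 0 {
		h := stack[len(stack)-1]
		stack = stack[:len(stack)-1]
		if h == "" || seen[h] {
			continue // already visited (e.g., shared ancestor of a MERGE)
		}
		seen[h] = true
		b, err := store.Get(h)
		if err != nil {
			return nil, err
		}
		out = append(out, b)
		stack = append(stack, b.Parents...)
	}
	return out, nil
}
</code></pre>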
<h4>5. Content Addressing — store everything on IPFS and get ready to scale</h4>
<p>As we alluded to earlier, each update to a Thread is backed by an IPFS CID hash (i.e., they are content-addressable chunks of data on IPFS). This means <em>where</em> the data is stored is no longer relevant… IPFS will find it on the network via its hash. This helps us address our fifth requirement, that we have a <em>way to link updates via their content, rather than where they are stored</em>. We’ve covered this topic a lot <a href="https://medium.com/textileio">in the past</a>, but for the uninitiated, the next paragraph provides a summary of how content addressing on IPFS works (pulled from <a href="https://medium.com/textileio/enabling-the-distributed-web-abf7ab33b638">this previous article</a>).</p>
<p>Rather than referencing a file or chunk of data by its location (think HTTP), we reference it via its <a href="https://en.wikipedia.org/wiki/Fingerprint_%28computing%29">fingerprint</a>. In IPFS and other such systems, this means identifying content by its cryptographic hash, or even better, a <a href="https://github.com/ipld/cid"><em>self-describing </em>content-addressed identifier</a> (<a href="https://github.com/multiformats/multihash">multihash</a>). A cryptographic hash is a (relatively) short alphanumeric string that’s calculated by running your content through a <a href="https://en.wikipedia.org/wiki/Cryptographic_hash_function">cryptographic hash function</a> (like <a href="https://en.wikipedia.org/wiki/SHA-3">SHA</a>). For example, when the (unencrypted) <a href="https://ipfs.io/ipfs/QmcNzGMQ1hUzSnqM2aH1Um8KWLRC4Rd9TyY4kFuYZXWXaF/Textile_Icon_500x500.png">Textile logo</a> is added to IPFS, its multihash ends up being <code>QmbgGgWW3vH7v9FDxVCzcouKGChqGEjtf6YLDUgSHnk5J2</code>. This ‘hash’ is actually the CID (Content IDentifier) for that file, computed from the <em>raw data </em>within that PNG. It is guaranteed to be cryptographically unique to the contents of <em>that </em>file, and that file only. If we change that file by even one bit, the hash will become something completely different.</p>
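<p>A toy Go example makes that property concrete. Note that a real IPFS CID additionally wraps the digest in a self-describing multihash/CID encoding, so the hex string printed below is not a valid CID, just the underlying fingerprint idea:</p>
<pre><code>// Toy illustration of content addressing: hash the raw bytes and
// use the digest as the identifier. Real IPFS CIDs wrap this digest
// in a multihash/CID encoding; this sketch shows only the idea.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("Textile_Icon_500x500.png")
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(data)
	fmt.Println("content fingerprint:", hex.EncodeToString(sum[:]))
	// Change a single byte of the file and this fingerprint changes
	// completely -- that property is what makes the address verifiable.
}
</code></pre>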
<p>Now, when we want to access a file over IPFS (like the above logo), we can simply ask the IPFS network for the file with that exact CID; the network will find the peers that have the data (using a DHT), retrieve it, and verify (using the CID) that it’s the correct file. What this means is we can technically get the file from <em>multiple</em> places, because as long as the file matches the hash, we know we’re getting the right data. Which brings us to the solution to our final requirement… use IPFS! For now, Textile is maintaining a network of large, homogeneous, volunteer nodes (we call them <a href="https://github.com/textileio/textile-go">Cafe</a>s) to ‘pin’ and store content on IPFS. It is important to note here that the nodes doing the pinning run the same software as the node on your phone — Textile Nodes that offer a pinning service to other peers. Soon, we’ll allow users to elect their own Cafe nodes, and even add additional nodes for redundancy. All this could eventually be driven by Filecoin for even greater scalability and flexibility.</p>
<h3>What’s Next?</h3>
<p>So there you have it. Five solutions to five requirements for seamless, secure, decentralized photo sharing and backup. Easy 😉. And at a conceptual level, the Textile Thread protocol <em>is</em> relatively simple: blocks of operations chained together to produce a beautiful Thread of photos. But there’s a lot of complexity going on under-the-hood that has required a lot of experimentation, testing, and limit pushing, especially on mobile. And our journey isn’t over yet.</p>
<p><a href="https://twitter.com/Textile01/lists/textile-team/members">The Textile team</a> &nbsp;is still hard at work iterating, updating, and improving upon what we &nbsp;already have working. For example, we’ll soon to moving to a new offline &nbsp;messaging system that allow us to drop the custom DHT fork, and move &nbsp;back to the public IPFS network. On top of this, our <a href="https://github.com/textileio/textile-go/projects/2">move to more powerful backup</a> &nbsp;and recovery capabilities has us taking new approaches to security, &nbsp;profile management, offline interactions, and much much more. On top of &nbsp;these changes, the team is actively working to modularize the Threads &nbsp;concept and code into its own stand-alone package, which should provide &nbsp;developers with something akin to a Realm and/or Firebase layer for &nbsp;decentralized mobile applications!</p>
<p>If you are interested in learning more about this stuff, reach out over <a href="https://twitter.com/Textile01">Twitter</a> or <a href="https://slack.textile.io/">Slack</a>, or pull us aside the next time you see us at a conference or event. We’re happy to provide background, thoughts, and opinions on how we think the future of decentralized apps will play out. In the meantime, don’t forget to check out our <a href="https://github.com/textileio">GitHub repos</a> for code and PRs that showcase our current and old implementations. We try to make sure all our development happens out in the open, so you can see things as they develop. Additionally, if you haven’t already, <a href="https://www.textile.photos/#cta">don’t miss out on signing up for our waitlist</a>, where you can get early access to Textile Photos, the beautiful interface to Textile’s Threads.</p>
</html>