Eclair #927 adds support for plugins written in Scala, Java, or another JVM-compatible language. Plugins are implementations of Eclair’s plugin interface. See the newly-added documentation for details.
Eclair #951 implements a channel backup mechanism and provides documentation for using it. Unlike the LND static channel backups described earlier in this newsletter, this needs to be backed up after every payment. A configuration option allows Eclair to call a script you specify to automatically handle backing up the data file whenever a backup is needed.
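Eclair invokes whatever script the configuration option points at each time a backup is needed. As a minimal sketch of what such a handler might do (the file names, destination, and timestamped-naming scheme are illustrative assumptions, not Eclair defaults):

```python
# Sketch of a backup handler a node operator might have Eclair call after
# it writes its data-file backup. Paths and naming are illustrative.
import shutil
import time
from pathlib import Path

def archive_backup(backup_file: str, dest_dir: str) -> Path:
    """Copy the freshly-written backup to durable storage under a
    timestamped name, so older copies are never overwritten."""
    src = Path(backup_file)
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    target = dest / f"{src.name}.{int(time.time())}"
    shutil.copy2(src, target)  # copy2 preserves file metadata
    return target
```

Because the script runs after every payment, copying to remote or replicated storage (rather than the same disk) is what actually protects against data loss.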
Eclair #885 adds a single UUID-style identifier for tracking a payment no matter which HTLCs are used to execute it, allowing simplified tracking of whether the payment itself ultimately succeeded or failed. This addresses the case where the program automatically retries sending a temporarily-failed payment over a different route, generating intermediate failures and other information that may not be useful to a high-level API consumer. Although there are differences in implementation and motivation, this seems conceptually related to C-Lightning #2382 as described in the notable code changes section of Newsletter #36.
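The idea can be sketched as rolling up per-attempt results under one parent identifier; this is an illustration of the concept, not Eclair's actual code:

```python
# Illustrative sketch: per-HTLC attempt outcomes are recorded under a
# single payment UUID, and only the terminal outcome is exposed.
import uuid

class Payment:
    def __init__(self):
        self.id = uuid.uuid4()   # one identifier for the whole payment
        self.attempts = []       # per-route outcomes, e.g. "temp_failure"

    def record_attempt(self, outcome: str):
        self.attempts.append(outcome)

    @property
    def status(self) -> str:
        # Intermediate failures from retried routes are ignored; a
        # high-level consumer only sees the payment's ultimate fate.
        if "succeeded" in self.attempts:
            return "succeeded"
        if self.attempts and self.attempts[-1] == "failed":
            return "failed"
        return "pending"
```

A consumer polling by the UUID sees "pending" while retries are in flight, rather than a stream of per-route failures.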
C-Lightning PRs #2541, #2545, and #2546 implement multiple changes to the gossip subsystem used for tracking which channels are available and calculating routes across them. This work was motivated by the million channels project, and performance results from that project are included in many of the commit messages. If Optech is interpreting the results correctly, the difference between the first and last commits in the series is a 79% reduction in memory use, from 2.6 GiB to 0.6 GiB, and an 80% reduction in the time to build a route to a randomly-selected node (within 20 hops), from 60 seconds to 12 seconds. (If even the improved values seem high, recall this is for a simulated network more than 25 times the size of the current mainnet network and 1,000 times the size of the network a bit over a year ago.) A notable part of this change is C-Lightning switching from its rather unique Bellman-Ford-Gibson (BFG) routing algorithm to a slightly-customized version of Dijkstra’s algorithm.
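C-Lightning's actual implementation is in C with a customized cost function (accounting for fees, CLTV risk, and so on); the following is only a minimal Python sketch of the Dijkstra approach over a fee-weighted channel graph:

```python
# Minimal Dijkstra sketch over a fee-weighted channel graph. The real
# routing code weighs more than fees; this shows only the core algorithm.
import heapq

def cheapest_route(graph, source, target):
    """graph: {node: [(peer, fee_cost), ...]}; returns (cost, path)
    for the cheapest route, or None if the target is unreachable."""
    dist = {source: 0}
    prev = {}
    pq = [(0, source)]
    visited = set()
    while pq:
        cost, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == target:
            # Walk predecessor links back to the source.
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return cost, path[::-1]
        for peer, fee in graph.get(node, []):
            new_cost = cost + fee
            if new_cost < dist.get(peer, float("inf")):
                dist[peer] = new_cost
                prev[peer] = node
                heapq.heappush(pq, (new_cost, peer))
    return None
```

Dijkstra visits each node at most once from a priority queue, which is what makes it much faster than Bellman-Ford-style algorithms on large, mostly-positive-weight graphs like the channel graph.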
C-Lightning #2554 changes the default invoice expiration from one hour to one week. This is the time after which the node will automatically reject attempts to pay the invoice. Services that want to minimize exchange-rate risk will need to pass a lower expiry value when using the invoice RPC.
C-Lightning #2540 adds an invoice hook that’s called whenever a “valid payment for an unpaid invoice has arrived.” Among other tasks that can be performed when a payment is received, this can be used by a plugin to implement “hold invoices” as previously implemented in LND (see our description of LND #2022 in Newsletter #38).
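The hold-invoice pattern on top of such a hook can be sketched as follows. This is a self-contained illustration of the idea, not C-Lightning's plugin API; the handler and the "hold" response shape are hypothetical:

```python
# Sketch of the hold-invoice idea: when a valid payment for an unpaid
# invoice arrives, park it instead of settling immediately, and let the
# operator settle (release the preimage) or cancel later.
held = {}

def on_invoice_payment(label: str, preimage: str) -> dict:
    """Hypothetical hook handler: defer resolution of the payment."""
    held[label] = {"preimage": preimage, "state": "held"}
    return {"result": "hold"}  # illustrative deferred response

def settle(label: str) -> str:
    """Releasing the preimage is what lets the payer's HTLC complete."""
    held[label]["state"] = "settled"
    return held[label]["preimage"]

def cancel(label: str):
    held[label]["state"] = "cancelled"
```

Holding the payment in this in-flight state is what lets a merchant, for example, confirm inventory before irrevocably accepting funds.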
LND #2933 adds a document describing LND’s current backup and recovery options.
C-Lightning #2506 adds a min-capacity-sat configuration parameter to reject channel open requests below a certain value. This replaces a hardcoded minimum of 0.00001 BTC (1,000 satoshis) previously in the code.
LND #2313 implements code and RPCs that allow LND nodes to use static channel backups. This is based on the Data Loss Protection (DLP) protocol implemented in LND #2370, which allows backing up a single file containing all of your current channel state at any point and then restoring from that file at any later point to get your remote peers to help you close each of those channels in its latest state (excluding unfinalized routed payments (HTLCs)). Note: despite the “static” in this feature’s name, this is not like an HD wallet one-time backup. It’s a backup that needs to be made at least as often as each time you open a new channel—but that’s much better than the current state, where you may not be able to recover any funds from any of your channels if you lose data. Further improvements to backup robustness are mentioned in the PR’s description. See the description of LND #2370 in Newsletter #31 for more details on how DLP-based backup and recovery works. Getting this major improvement to backups merged was one of the major goals for the upcoming LND version 0.6-beta.
LND #2740 implements a new gossiper subsystem which puts its peers into two buckets: active gossipers and passive gossipers. Active gossipers are peers communicating in the currently normal way of sharing all of their state with your node; passive gossipers are peers from which you will only request specific updates. Because most active gossipers will be sending you the same updates as all other gossipers, having more than a few of them is a waste of your bandwidth, so this code will ensure that you get a default of 3 active gossipers and then put any other gossipers into the passive category. Furthermore, the new code will try to only request updates from one active gossiper at a time in round-robin fashion to avoid syncing the same updates from different nodes. In one test described on the PR, this change reduced the amount of gossip data requested by 97.5%.
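The bucketing and round-robin querying described above can be sketched as follows. LND's code is in Go and its real types differ; the names here are illustrative:

```python
# Sketch of active/passive gossiper bucketing with round-robin querying.
# DEFAULT_ACTIVE mirrors the default of 3 active gossipers noted above.
from itertools import cycle

DEFAULT_ACTIVE = 3

def bucket_peers(peers):
    """First DEFAULT_ACTIVE peers gossip fully; the rest only answer
    specific queries, saving redundant bandwidth."""
    return peers[:DEFAULT_ACTIVE], peers[DEFAULT_ACTIVE:]

def sync_round_robin(active, updates_needed):
    """Ask one active gossiper at a time, rotating through them, so the
    same update is not downloaded from several peers simultaneously."""
    return list(zip(cycle(active), updates_needed))
```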
LND #2885 changes how LND attempts to reconnect to all of its peers when coming back online. Previously it attempted to open connections to all its persistent peers at once. Now it spreads the connections over a 30 second window to reduce peak memory usage by about 20%. This also means that messages that are sent on a regular interval, such as pings, do not happen at the same time for all peers.
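The staggering idea is simple to illustrate (LND's implementation is in Go; this Python sketch stubs out the actual peer handling):

```python
# Sketch of spreading reconnection attempts over a 30-second window
# instead of opening all persistent-peer connections at once.
import random

RECONNECT_WINDOW = 30.0  # seconds

def schedule_reconnects(peers):
    """Assign each persistent peer a random delay within the window,
    returned in firing order. Desynchronizing connection setup also
    desynchronizes later periodic messages such as pings."""
    return sorted((random.uniform(0, RECONNECT_WINDOW), p) for p in peers)
```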
Eclair #894 replaces the JSON-RPC interface with an HTTP POST interface. Instead of RPC commands, HTTP endpoints are used (e.g. the channelstats RPC is now POST http://localhost:8080/channelstats). Parameters are provided to the endpoint using named form parameters with the same JSON syntax as used with RPC parameters. Returned results are identical to before the change. The old interface is still available using the configuration parameter eclair.api.use-old-api=true, but it is expected to be removed in a subsequent release. See the updated API documentation for details.
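A client call against the new interface might be built like this (the port and endpoint follow the example above; the request is constructed but not sent, and authentication is omitted for brevity):

```python
# Sketch of building a request for the one-endpoint-per-command HTTP POST
# API described above. Parameters travel as named form fields.
import urllib.parse
import urllib.request

def build_api_request(endpoint: str, **params) -> urllib.request.Request:
    """Each former RPC command maps to POST /<command>."""
    data = urllib.parse.urlencode(params).encode()
    return urllib.request.Request(
        f"http://localhost:8080/{endpoint}",
        data=data,
        method="POST",
    )

req = build_api_request("channelstats")
```

Sending it with `urllib.request.urlopen(req)` (plus whatever authentication the node requires) would return the same JSON results as the old RPC interface.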
LND #2759 lowers the default CLTV delta for all channels from 144 blocks (about 24 hours) to 40 blocks (about 6.7 hours). When Alice wants to pay Zed through a series of routing nodes, she starts by giving money to Bob under the terms that either Alice can take it back after (say) 400 blocks or Bob can claim the money before then if he can provide the preimage for a particular hash (the key that opens a hashlock). The 400-block delay is enforced onchain, if necessary, using OP_CHECKLOCKTIMEVERIFY (CLTV). Bob then sends the money (minus his routing fee) to Charlie with similar terms, except that the CLTV value is reduced from Alice’s original 400 blocks by the CLTV delta of his channel with Charlie, reducing the value to 360 blocks. This ensures that if Charlie waits the maximum time to fulfil his HTLC to Bob and claim his payment (360 blocks), Bob still has 40 blocks to claim his payment from Alice by fulfilling the original HTLC. If Bob’s HTLC expiry time with Charlie wasn’t reduced at all and used a 400-block delay, Bob would be at risk of losing money: Charlie could delay fulfilling his HTLC until 400 blocks, and Alice could then cancel her HTLC with Bob before Bob had time to fulfil the HTLC.
Subsequent routers each successively subtract their delta from the value of the terms they give to the next node in the route. Using a high CLTV delta therefore reduces the possible number of hops that can be used in a route, and makes a channel less attractive for use when routing payments.
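The arithmetic above can be made concrete with a short worked example (the 400-block starting expiry and uniform 40-block delta follow the scenario in the text):

```python
# Worked example of the CLTV arithmetic: each hop subtracts its channel's
# CLTV delta from the expiry it received, so every upstream node keeps at
# least `delta` blocks of margin to claim its incoming HTLC.
def hop_expiries(initial_expiry, deltas):
    """Return the absolute CLTV expiry offered to each successive hop."""
    expiries = [initial_expiry]
    for delta in deltas:
        expiries.append(expiries[-1] - delta)
    return expiries

# Alice -> Bob -> Charlie with a 40-block delta on Bob's channel:
print(hop_expiries(400, [40]))  # [400, 360]
```

With the old 144-block default, a 400-block budget allows only two forwarding hops (400, 256, 112) before the expiry runs out; with 40-block deltas the same budget supports far more hops, which is why a smaller delta makes a channel more attractive for routing.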
Eclair #826 updates Eclair to be compatible with Bitcoin Core 0.17 and upcoming 0.18, dropping support for 0.16.
C-Lightning #2470 modifies the recently-added setchannelfee RPC so that “all” can be passed instead of a specific node’s id in order to set the routing fee for all channels.
LND #2691 increases the default address look-ahead value during recovery from 250 to 2,500. This is the number of keys derived from an HD seed that the wallet uses when rescanning the block chain for your funds. Previously, if your node gave out more than 250 addresses or pubkeys without any of them being used, your node would not find your complete balance on its first rescan, requiring you to initiate additional attempts. Now, you’d need to give out more than 2,500 addresses before reiteration might become necessary. An earlier version of this PR wanted to set this value to 25,000, but there were concerns that this would significantly slow down rescanning with the BIP158 Neutrino implementation, so the value was decreased until it could be shown that people needed a value that high. (Note: checking addresses against a BIP158 filter is very fast by itself; the problem is that any match requires downloading and scanning the associated block—even if it’s a false-positive match. The more addresses you check, the greater the number of expected false positives, so scanning becomes slower and requires more bandwidth.)
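The false-positive argument in the note above can be quantified. Assuming BIP158 basic filters with their specified false-positive rate of about 1/784931 per scanned item, and an illustrative chain height of roughly 570,000 blocks:

```python
# Back-of-the-envelope arithmetic for why a large look-ahead slows
# Neutrino rescans: every filter match (true or false) forces a full
# block download. 1/784931 is the BIP158 basic-filter parameter M.
FP_RATE = 1 / 784931

def expected_false_positive_blocks(num_addresses, num_blocks):
    """Expected number of blocks downloaded purely due to filter
    false positives during one full rescan."""
    return num_addresses * num_blocks * FP_RATE

# Compare the old default, the new default, and the rejected value:
for n in (250, 2_500, 25_000):
    print(n, round(expected_false_positive_blocks(n, 570_000), 1))
```

The expected extra block downloads scale linearly with the look-ahead, which is why 2,500 was accepted but 25,000 was held back pending evidence that users needed it.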
LND #2765 changes how the LN node responds to channel breaches (attempted theft). Previously, if an attempted breach was detected, the node created a breach remedy transaction to collect all funds associated with that channel. However, when users start using watchtowers, the watchtower may create a breach remedy transaction but not include all the possible funds. (This doesn’t mean the watchtower is malicious: your node may simply not have had a chance to tell the watchtower about the latest commitments it accepted.) This PR updates the logic used to generate the breach remedy transaction so that it only collects the funds that haven’t been collected by prior breach remedy transactions, allowing recovery of any funds the watchtower didn’t collect.
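At its core, the updated logic amounts to sweeping only the outputs no earlier remedy transaction already claimed. A sketch of that idea (the outpoint representation is illustrative, not LND's actual data structures):

```python
# Sketch of the updated breach-remedy logic: sweep only the channel
# outputs not already claimed by an earlier remedy transaction, e.g.
# one broadcast by a watchtower with stale state.
def remaining_breach_outputs(channel_outputs, already_swept):
    """Both arguments are sets of (txid, vout) outpoints; the result is
    what the node's own remedy transaction should still collect."""
    return channel_outputs - already_swept

chan = {("breach_tx", 0), ("breach_tx", 1), ("breach_tx", 2)}
tower_swept = {("breach_tx", 0), ("breach_tx", 1)}  # watchtower got these
to_sweep = remaining_breach_outputs(chan, tower_swept)
```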