2 changes: 1 addition & 1 deletion OSlibs/android/Android_Readme.md
@@ -26,7 +26,7 @@ source set_android_env.sh

`echo $AR`

- If getting output your Android NDK developement environment is set temporarily in terminal window in which you executed the set_android_env.sh script.
- If getting output your Android NDK development environment is set temporarily in terminal window in which you executed the set_android_env.sh script.


## Compile iguana for android
8 changes: 4 additions & 4 deletions OSlibs/ios/iOS_Readme.md
@@ -6,18 +6,18 @@

## Compile iguana for iOS

- Get SuperNET repository clonned on your machine with command
- Get SuperNET repository cloned on your machine with command

`git clone https://github.com/jl777/SuperNET`

- Change your directory to the clonned SuperNET and execute the following commands:
- Change your directory to the cloned SuperNET and execute the following commands:

`./m_onetime m_ios`

`./m_ios`

- You'll find `libcrypto777.a` and `iguana` for iOS in agents directory inside SuperNET repo clonned dir.
- To check if the files are for iOS platform, you can execute the folowing command which will show a result something like this:
- You'll find `libcrypto777.a` and `iguana` for iOS in agents directory inside SuperNET repo cloned dir.
- To check if the files are for iOS platform, you can execute the following command which will show a result something like this:

`cd agents`

8 changes: 4 additions & 4 deletions README.md
@@ -18,9 +18,9 @@ gecko: abstracted bitcoin compatible blockchains that run via basilisk lite mode

basilisk: abstracted crypto transactions layer, which has a reference implementation for bitcoin protocol via the iguana nodes, but can be expanded to support any coin protocol that can support the required functions. Since it works with bitcoin protocol, any 2.0 coin with at least bitcoin level functionality should be able to create a basilisk interface.

iguana: most efficient bitcoin core implementation that can simultaneously be full peers for multiple bitcoin blockchains. Special support being added to virtualize blockchains so all can share the same peers. The iguana peers identify as a supernet node, regardless of which coin, so by having nodes that support multiple coins, supernet peers are propagated across all coins. non-iguana peers wont get any non-standard packets so it is interoperable with all the existing bitcoin and bitcoin clone networks
iguana: most efficient bitcoin core implementation that can simultaneously be full peers for multiple bitcoin blockchains. Special support being added to virtualize blockchains so all can share the same peers. The iguana peers identify as a supernet node, regardless of which coin, so by having nodes that support multiple coins, supernet peers are propagated across all coins. non-iguana peers won't get any non-standard packets so it is interoperable with all the existing bitcoin and bitcoin clone networks

komodo: this is the top secret project I cant talk about publicly yet
komodo: this is the top secret project I can't talk about publicly yet

> # TL;DR
>
@@ -145,8 +145,8 @@ Loretta:/Users/volker/SuperNET/includes # ln -s ../osx/libsecp256k1 .
3.) I had to change ulimit
During the syncing, I have many, many messages like this:
>>
>> cant create.(tmp/BTC/252000/.tmpmarker) errno.24 Too many open files
>> cant create.(tmp/BTC/18000/.tmpmarker) errno.24 Too many open files
>> can't create.(tmp/BTC/252000/.tmpmarker) errno.24 Too many open files
>> can't create.(tmp/BTC/18000/.tmpmarker) errno.24 Too many open files
>>
Loretta:/Users/volker/SuperNET # ulimit -n 100000

4 changes: 2 additions & 2 deletions iguana/Readme.md
@@ -124,7 +124,7 @@ The following are the second pass data structures that are created from a batch

I tried quite a few variations before settling on this. Earlier versions combined everything into a single dataset, which is good for making searches via hashtable really fast, but with the ever growing size of the blockchain not very scalable. The maximum size of 2000 blocks is 2GB right now and at that size there is no danger of overflowing any 32bit offset, but for the most part, the 32bit indexes are of the item, so it can represent much larger than 4GB.

iguana doesnt use any DB as that is what causes most of the bottlenecks and since the data doesnt change (after 20 blocks), a DB is just overkill. Using the memory mapped file approach, it takes no time to initialize the data structures, but certain operations take linear time relative to the number of bundles. Achieving this performance requires constant time performance for all operations within a bundle. Since most bundles will not have the hash that is being searched for, I used a bloom filter to quickly determine which bundles need to be searched deeper. For the deeper searches, there is a open hashtable that always has good performance as it is sized so it is one third empty. Since the total number of items is known and never changes, both the bloom filters and hashtable never change after initial creation.
iguana doesn't use any DB as that is what causes most of the bottlenecks and since the data doesn't change (after 20 blocks), a DB is just overkill. Using the memory mapped file approach, it takes no time to initialize the data structures, but certain operations take linear time relative to the number of bundles. Achieving this performance requires constant time performance for all operations within a bundle. Since most bundles will not have the hash that is being searched for, I used a bloom filter to quickly determine which bundles need to be searched deeper. For the deeper searches, there is a open hashtable that always has good performance as it is sized so it is one third empty. Since the total number of items is known and never changes, both the bloom filters and hashtable never change after initial creation.
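
The scheme above can be pictured with a minimal sketch in C. This is not iguana's code: the sizes (a 2^20-bit bloom filter, a 3000-slot table for roughly 2000 items, i.e. about one third empty) and the toy hash are assumptions made for illustration, but the flow is the one described: the bloom filter rejects most bundles outright, and a never-resized open-addressed table handles the deeper search.

```c
/* Illustrative sketch only -- not iguana's actual structures or hashes. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define BLOOM_BITS  (1u << 20)     /* illustrative bloom filter size */
#define TABLE_SLOTS 3000u          /* ~1/3 empty for a ~2000-item bundle */

typedef struct { uint8_t bytes[32]; } hash256;

typedef struct {
    uint8_t  bloom[BLOOM_BITS / 8];
    hash256  keys[TABLE_SLOTS];
    int32_t  itemind[TABLE_SLOTS]; /* 32-bit item index, -1 marks an empty slot */
} bundle_index;

static uint32_t mix(const hash256 *h, int salt)
{   /* cheap stand-in hash: reuse 4 bytes of the (already random-looking) txid */
    uint32_t x; memcpy(&x, &h->bytes[4 * salt], 4); return x;
}

static void bundle_add(bundle_index *b, const hash256 *h, int32_t ind)
{
    for (int s = 0; s < 3; s++) {                 /* set the bloom filter bits */
        uint32_t bit = mix(h, s) % BLOOM_BITS;
        b->bloom[bit >> 3] |= (uint8_t)(1 << (bit & 7));
    }
    uint32_t slot = mix(h, 0) % TABLE_SLOTS;
    while (b->itemind[slot] >= 0)                 /* linear probing to an empty slot */
        slot = (slot + 1) % TABLE_SLOTS;
    b->keys[slot] = *h;
    b->itemind[slot] = ind;
}

static int32_t bundle_find(const bundle_index *b, const hash256 *h)
{
    for (int s = 0; s < 3; s++) {                 /* bloom filter first ... */
        uint32_t bit = mix(h, s) % BLOOM_BITS;
        if ((b->bloom[bit >> 3] & (1 << (bit & 7))) == 0)
            return -1;                            /* definitely not in this bundle */
    }
    uint32_t slot = mix(h, 0) % TABLE_SLOTS;      /* ... then the deeper search */
    while (b->itemind[slot] >= 0) {
        if (memcmp(&b->keys[slot], h, sizeof(*h)) == 0)
            return b->itemind[slot];
        slot = (slot + 1) % TABLE_SLOTS;
    }
    return -1;
}

int main(void)
{
    static bundle_index b;
    memset(b.itemind, 0xff, sizeof(b.itemind));   /* mark every slot empty (-1) */
    hash256 txid = { { 0xde, 0xad, 0xbe, 0xef } };
    bundle_add(&b, &txid, 1234);
    printf("found item index %d\n", bundle_find(&b, &txid));
    return 0;
}
```

Because both structures are written once when the bundle is created and only read afterwards, they can live inside the read-only bundle file itself.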

What this means is that on initialization, you memory map the 200 bundles and in the time it takes to do that (less than 1sec), you are ready to query the dataset. Operations like adding a privkey takes a few milliseconds, since all the addresses are already indexed, but caching all the transactions for an address is probably not even necessary for a single user wallet use case. However for dealing with thousands of addresses, it would make sense to cache the lists of transactions to save the few milliseconds per address.
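
To illustrate that startup path (again a sketch, not iguana's loader; `map_bundle` is a hypothetical helper), memory-mapping a bundle read-only is essentially the whole initialization:

```c
/* Sketch of why startup is fast: each bundle file is mmap'd read-only and the
 * bloom filter / hashtable inside it are usable immediately, with no parsing pass. */
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

static void *map_bundle(const char *path, size_t *lenp)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) return NULL;
    struct stat st;
    if (fstat(fd, &st) != 0) { close(fd); return NULL; }
    void *base = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);                          /* the mapping stays valid after close */
    if (base == MAP_FAILED) return NULL;
    *lenp = (size_t)st.st_size;
    return base;                        /* cast to the bundle's header layout to query */
}

int main(int argc, char **argv)
{
    size_t len;
    void *bundle = (argc > 1) ? map_bundle(argv[1], &len) : NULL;
    if (bundle != NULL)
        printf("mapped %zu bytes, ready to query\n", len);
    return 0;
}
```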

@@ -140,7 +140,7 @@ I had to make the signatures from the vinscripts purgeable as I dont seem much u

It is necessary to used an upfront memory allocation as doing hundreds of millions of malloc/free is a good way to slow things down, especially when there are many threads. Using the onetime allocation, cleanup is guaranteed to not leave any stragglers as a single free releases all memory. After all the blocks in the bundle are processed, there will be a gap between the end of the forward growing data called Kspace and the reverse growing stack for the sigs, so before saving to disk, the sigs are moved to remove the gap. At this point it becomes clear why it had to be a reverse growing stack. I dont want to have to make another pass through the data after moving the signatures and by using negative offsets relative to the top of the stack, there is no need to change any of the offsets used for the signatures.
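
A rough sketch of that one-time allocation follows; the names (`Kspace`, `sig_push`, `close_gap`) and sizes are invented for illustration and this is not iguana's actual allocator, but it shows why the negative, top-relative signature offsets survive the move that closes the gap.

```c
/* Illustrative two-ended arena: forward-growing data, reverse-growing sig stack. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

typedef struct {
    uint8_t *Kspace;     /* one upfront allocation for the whole bundle */
    size_t   size;       /* total bytes reserved */
    size_t   used;       /* forward-growing data occupies [0, used) */
    size_t   stacktop;   /* reverse-growing sigs occupy [stacktop, size) */
} bundle_mem;

static void *kspace_alloc(bundle_mem *m, size_t n)
{   /* forward allocation for txids, vouts, vins, nonstandard scripts, ... */
    if (m->used + n > m->stacktop) return NULL;
    void *ptr = m->Kspace + m->used;
    m->used += n;
    return ptr;
}

static int64_t sig_push(bundle_mem *m, const uint8_t *sig, size_t n)
{   /* reverse allocation; returns a negative offset from the top of the stack */
    if (m->stacktop - n < m->used) return 0;
    m->stacktop -= n;
    memcpy(m->Kspace + m->stacktop, sig, n);
    return -(int64_t)(m->size - m->stacktop);   /* e.g. first 72-byte sig -> -72 */
}

static size_t close_gap(bundle_mem *m)
{   /* before saving: slide the whole sig block down so no gap hits the disk;
       a sig at offset -k from the top still sits k bytes before the new end,
       so none of the stored offsets need to change */
    size_t siglen = m->size - m->stacktop;
    memmove(m->Kspace + m->used, m->Kspace + m->stacktop, siglen);
    m->stacktop = m->used;
    return m->used + siglen;                    /* total bytes to write out */
}

int main(void)
{
    bundle_mem m = { malloc(1 << 20), 1 << 20, 0, 1 << 20 };
    if (m.Kspace == NULL) return 1;
    uint32_t *item = kspace_alloc(&m, sizeof(*item));    /* forward data */
    *item = 42;
    uint8_t sig[72] = { 0x30 };                          /* dummy signature bytes */
    int64_t off = sig_push(&m, sig, sizeof(sig));        /* reverse-growing sig */
    printf("forward bytes %zu, sig offset %lld, file bytes %zu\n",
           m.used, (long long)off, close_gap(&m));
    free(m.Kspace);
    return 0;
}
```

In the written file the signature block sits flush against the forward data, and an offset of -k still means k bytes before the end of the signature region, which is why no second pass over the data is needed after the move.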

Most of the unspents use standard scripts so usually the script offset is zero. However this doesnt take up much room at all as all this data is destined to be put into a compressed filesystem, like squashfs, which cuts the size in about half. Not sure what the compressed size will be with the final iteration, but last time with most of the data it was around 12GB, so I think it will end up around 15GB compressed and 25GB uncompressed.
Most of the unspents use standard scripts so usually the script offset is zero. However this doesn't take up much room at all as all this data is destined to be put into a compressed filesystem, like squashfs, which cuts the size in about half. Not sure what the compressed size will be with the final iteration, but last time with most of the data it was around 12GB, so I think it will end up around 15GB compressed and 25GB uncompressed.

Each bundle file will have the following order:
[<txid> <vouts> <vins>][nonstandard scripts and other data] ... gap ... [signatures]