Saturday, September 19, 2015

The Demons of MiraCL

Once upon a time, an old and experienced warrior set out on a grand quest towards assembling The Scroll of Distributed Identity Based Encryption. In his home village, he had equipped himself with his trusted weapons, said a prayer for his fallen predecessors and stepped out of the village.

First he had to fight his way across the Plains of Emptiness. Whispering ancient mantras, he wielded his weapons to prevent the Ghosts of Void from taking his soul away from this world, all the while making his way towards the castle named Prototype. Six days he fought, and on the seventh day he rested as the Test Suite Stream ran softly under the windows of the castle and all was well.

That was, however, not the end of the quest. After consulting the Gods he realized that he would have to part with his old companion. For the quest ahead of him was foretold to be completed with a different tool, a weapon so powerful and ancient that only silent whispers about the miracles it could perform roamed the world. The warrior eagerly set out on the next part of the quest, paying little heed to searching the lore for stories about his new tool.

Alas, that put him at peril, right at the gate of his castle, where he needed to defeat a many-headed beast. But he found that MiraCL could only deal with fixed-size numbers, and any larger number, though named Big, would make the weapon break completely and dump filthy core dumps at him. After trying a few more times, he found he could sometimes, but not always, get a warning from MiraCL that the numbers were too big. He was confused. How could a number be "too big" for such a mighty tool? Courageously, he peered inside MiraCL to find the answer but found nothing but dismay.

For you see, MiraCL is an ancient weapon, forged by the Elders long before humans walked the Earth, and thus is not for a mere mortal to understand. In those times, source code comments were pure blasphemy and memory was limited, and thus The Elders decreed that only 3-letter variable names shall be used. Optical illusions, out of the grasp of mere men, were abundant. Different objects with the same name appeared out of nothing and weird macros obscured his vision. Wearily, he drove the beast away and sat down to meditate.

In his meditations, he saw the Lore of MiraCL in front of his eyes and quickly understood that it should have been the first place to look. For MiraCL was a thing out of this world and could not be wielded by mere mortals without peril.

This also gave him the knowledge of how to combine MiraCL and the mighty Address Sanitizer, his indispensable light in the Darkness of Memory Access, guiding his steps away from the gaping chasms of Segfault. To forge MiraCL together with Address Sanitizer, there are two options. One is to abolish the Assembly Script and use only pure C for summoning. To do that, add this option to config.h:

#define MR_NOASM

and omit mrmuldv.c from your build spell.

Another way to forge MiraCL and the mighty Asan allows the use of Assembly but requires a 64-bit system. For that, invoke the following demonic spell inside the guts of the Assembly implementation file, mrmuldv.g64:


#if defined(__clang__) || defined (__GNUC__)
# define ATTRIBUTE_NO_SANITIZE_ADDRESS __attribute__((no_sanitize_address))
#else
# define ATTRIBUTE_NO_SANITIZE_ADDRESS
#endif


and bless every function in that file with that attribute.

However, this was not the end of the quest, far from it. Our warrior went back to the Prototype castle and started replacing the OpenSSL in the construction with MiraCL, as was foretold. He worked hard, day and night, painstakingly looking at each brick, each beam. Once everything was in place, he rested and looked at the Test Suite Stream. But what horror he saw there! The stream was no longer the crystal clear water it once was but a stream of pure blood, splashing around, staining the walls. What once was in harmony with OpenSSL now lay in shatters. An ancient curse in the heart of MiraCL, perhaps?

The warrior had no other choice but to fight the curse, for it was his destiny. And so he toiled on. He separated the stream into smaller parts, letting each go through only a part of the castle. He let it test one part at a time to see the result of each. After much work, he could see which rooms of the castle left each of the little trickles crystal clear and which turned them nasty. Divide and conquer: understand parts separately and know when they work. Then you can rely on them and use them in the fight to achieve correctness of larger components. He still remembered the teachings of his Temple well.

But the curse was not so simple. He saw many streams running clear, but once he put them together, suddenly all the individual streams turned to blood! He could go through everything, seeing nothing but harmony, but on the way back from the last room, everything would be in ruin again. The warrior could not believe his eyes. How was this possible? He inspected everything in detail again until he found it. He wept for hours, for with MiraCL he had awakened an evil ancient curse, thought to be long gone from our world: the Curse of Shifting Global State. Indeed, somewhere, somehow, MiraCL would shift and make all his work worthless.

But he persisted, as was foretold. He knew the curse had to be stopped. Covered in bloody mud, he rose again, the flaming sword of Divide & Conquer in his hands. Unresting, he slashed and slashed through. He found the time and place of the shift. He watched the shift occur. It was a call to powmod() which, behind his back, changed everything in the castle into ruin. The warrior couldn't believe what was happening. The powmod() function looked like a harmless little bird at first. Who would imagine it causing such havoc? And yet, it managed to trash the whole building. Such was the Curse of Shifting Global State.

He waded through the misty Source of MiraCL, doing his best to decipher hidden meaning and ignoring any illusions of obfuscated C. There he saw that powmod() partners with prepare_monty() in its evil deed of changing a global parameter. That parameter was, however, crucial for the representation of his elliptic curves. When changed, the curves would collapse into singularity.


powmod() assumes that numbers use a Montgomery n-residue representation with a constant modulus. That was the case for objects of type G2 (an elliptic curve point) that are used in our app. Calling powmod() with a different modulus will change the global Montgomery settings and quietly break any existing instance of G2.
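The trap can be sketched outside of MiraCL. What follows is a toy Python model, not MiraCL's actual code (the names prepare_monty and the mip-style global are borrowed only loosely): in Montgomery form a number x is stored as x*R mod n, so every stored residue silently depends on whatever modulus the global state currently holds.

```python
# Toy illustration of why a global Montgomery modulus is dangerous.
# In Montgomery form x is stored as x*R mod n, so a stored residue is
# only meaningful for the modulus it was created with.

R_BITS = 16
R = 1 << R_BITS

_monty_modulus = None  # the global state, loosely analogous to MiraCL's mip

def prepare_monty(n):
    """Set the global modulus, invalidating every existing residue."""
    global _monty_modulus
    _monty_modulus = n

def to_monty(x):
    return (x * R) % _monty_modulus

def from_monty(x_bar):
    # R must be invertible mod n, i.e. n must be odd
    r_inv = pow(R, -1, _monty_modulus)
    return (x_bar * r_inv) % _monty_modulus

prepare_monty(101)
stored = to_monty(42)          # a "curve coordinate" kept under modulus 101
assert from_monty(stored) == 42

prepare_monty(97)              # powmod() with a different modulus does this
print(from_monty(stored) == 42)   # False -- the stored value is now garbage
```

This is exactly what happened to the G2 points: their bits never changed, but the global modulus underneath them had moved.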

Cleansing himself from the effects of this sin required much meditation, but eventually clarity descended on him and he saw that MiraCL can only deal with a single modulus in the whole computation. Alas, he needed to work with different moduli, and that was why The Elders had sent a curse upon him. This curse was too great; the warrior had no choice but to masterfully avoid it. He locked powmod() in a chest and buried it meters underground in a stone grave. After finishing this hard labour, he sought a replacement. Destiny was generous with him, for another function, power(), turned out to be safe and powerful enough for his purpose.

Our warrior felt much lighter once this burden had been taken off his chest. Walking through the castle, with crystal water returning to its stream, he felt blissful. And then he stumbled and fell down a few broken stairs. These stairs also came from MiraCL; they were Big.operator+=(). Were they cursed too? They caused him to crash, and only thanks to mighty Asan did he not suffer much pain. There were many other stairs in the castle; how come only these were so treacherous?

This time he chose to unleash the Watchdogs of LLDB on this issue and they led him right to a tapestry on the wall that was not present anywhere else. The tapestry was named otstr() and it displayed many numbers in hexadecimal; unfortunately, it also unleashed the potential to crash on the stairs. It was a very dangerous oversight by The Elders.

It turned out that a hashing function in the code for IBE was implemented in a careless way, causing overflow of the Big type. It relied on the fact that such overflows are normally detected and avoided. Unfortunately, this detection could be turned off, as was done in otstr(). The otstr() function never enabled overflow checking again, an obvious bug. Watchpoints in LLDB helped detect the places where mip->check was changed.

All of this made the warrior suspect MiraCL, but wise as he was, he remembered similar perils with his other trusted tools as well; mastering a weapon was never an easy task.

No, I wasn't high when writing this, just a little frustrated and this felt like fun. Maybe we should write all programming blogs like this ;-)

Tuesday, September 8, 2015

Compiling openssl with emscripten

a.k.a. the days of 10kB JavaScript are gone.

We are doing some crypto app prototypes and figured that having demos on the web, without having to download or install anything, is quite valuable. And despite the issues on the SSL side of OpenSSL, the crypto library is still quite useful. Let's see how to build it into JavaScript using the amazing emscripten.

I'm using openssl v1.0.2a which is commit 3df69d3aefde7671053d4e3c242b228e5d79c83f in the git repository. First I have emscripten prepare my environment for compilation to make sure I'm using the correct compiler, archiver and linker (emcc, ar, ld). I do

emmake bash

or any other shell such as fish. I wasn't able to run emmake ./Configure or emconfigure directly, so I just ran a new shell. From the shell I can configure openssl as usual:

./Configure -no-asm -no-apps no-ssl2 no-ssl3 no-comp no-hw no-engine no-deprecated shared no-dso --openssldir=built linux-generic32

Note that a 64-bit architecture cannot be used. I also had to modify the generated Makefile a bit.


  1. on line 63, delete the path after $(CROSS_COMPILE) so that it looks like this:
    CC= $(CROSS_COMPILE)cc
  2. on line 64, remove the -O3 flag, just to be sure, because not all emscripten optimizations may be compatible with openssl
After this, you're able to build the library using

make

To test, I did build one of the demos:

emcc  demos/sign/sign.c -lcrypto  -o demos/sign/sign.html -Iinclude -L. --preload-file demos/sign@/

The resulting library is almost 4 MB; it may be useful to try to remove some more features. Now, it's not really clear if crypto software running this way is still secure. I know that the browser Crypto API + emscripten ensure that the randomness from /dev/urandom is correct, but I may need to dig into the debugger to be sure it's really used correctly.





Friday, August 14, 2015

Building an ethereum ÐApp, part IV: The Frontier

What is ethereum and ÐApps? Check here  or search
This is part IV of a series. Part I

Welcome to explore what's behind the Frontier!

The first real release of Ethereum is out and it mostly works! First, let's get out some updates to previous blog posts.

Some updates

  • You can now open the JavaScript console using geth attach which will connect to geth you've already started on your machine. But on Windows, this is still not working very well. A fix is underway. See more here.
  • You may want to use the eth.contract interface to create and manipulate your contract
  • Of course it's always a good idea to keep in sync with the JavaScript API reference!
  • eth.sendTransaction() now returns the tx hash. To get the contract address if you've sent some code, use eth.getTransactionReceipt(txhash).contractAddress

An update on running a private chain

This is the commandline I use for development:

geth.exe --rpc --rpccorsdomain="*" --datadir geth_private --rpcapi "admin,db,eth,debug,miner,net,shh,txpool,personal,web3" --nodiscover --networkid 7938 --genesis private_genesis.json --solc "your/path/to/solc.exe" --unlock 0

and my genesis file is in https://github.com/Quiark/eth-devchain . Actually all you need for a private dev chain is there.

Note that:
  • the difficulty is set to 4 so that you can create blocks immediately and even the DAG is tiny
  • the command above enables ALL management APIs to the RPC which would be a totally unsafe thing to do on the livenet.
  • change your path to solc.exe (can be downloaded with the cpp-ethereum or eth++ package)
  • for fake test ether, you can either just mine or edit the genesis file to assign some balance to one account. You just need to have a private and public key for that account in advance. You can create them on the live net first.

Back to coding

I've come to the stage where I need to implement payouts in my Roboth.web3 dapp based on which user has the most upvotes. In a few words, this app lets a user post a problem (a job) and ask the crowd to provide solutions. Users up/down vote solutions and after a fixed amount of time, the highest rated solution gets selected and is paid the amount initially offered with the problem. There are a number of problems with that, two of which I'm going to discuss.

Timed automatic payouts

Payout to the highest rated user should occur at a certain time, ideally automatically. Ethereum by itself doesn't support auto-triggered function calls. In this case, the solution is simple: let the supposed receiver of the payout ask for it themselves. After the contract verifies that they are indeed the correct receiver, it can send out the payment.
To make it even better, our centralized server or the JS application can handle this automatically, so that the human does not need to think about it and can instead focus on whatever things humans like to do. The JS side of the dapp can ask our contract whether a user is eligible for a payout using a const function in the contract - one that only reads data and is free to execute.
I haven't implemented this in my dapp yet, wait for next blog post to see how it turns out.

Finding highest rated solution

Each solution can be up or down voted by any user, much like on StackExchange. That means the top position can change dynamically. When payout time comes, we need to find the top player for that particular job. Depending on the data structure used, this can be time consuming, and time equals gas equals money. If you have all solutions in one list, finding the max is just a linear operation and could be fine if you don't expect too many of them. In my case, solutions for a single job are not located together, so to find the top one I would have to iterate over all solutions for all jobs, which would be very costly.
The top rated solution can be cached so that it can be retrieved immediately. Since a solution can rise to the top and leave it again when downvoted, we need a heap data structure to perform such changes efficiently. A heap can be implemented using a simple array, so the lack of pointers in Solidity should not be an issue.
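To make the cached-top idea concrete, here is a minimal array-backed max-heap sketched in Python; a Solidity version would use a dynamic array of (solutionId, score) structs, but the sift logic is the same. The names here are mine, not from the actual contract.

```python
# Sketch of an array-backed max-heap for the "top rated solution" cache.
# Items are (id, score) tuples; heap[0] is always the highest score.

def _sift_up(heap, i):
    while i > 0:
        parent = (i - 1) // 2
        if heap[parent][1] >= heap[i][1]:
            break
        heap[parent], heap[i] = heap[i], heap[parent]
        i = parent

def _sift_down(heap, i):
    n = len(heap)
    while True:
        left, right = 2 * i + 1, 2 * i + 2
        largest = i
        if left < n and heap[left][1] > heap[largest][1]:
            largest = left
        if right < n and heap[right][1] > heap[largest][1]:
            largest = right
        if largest == i:
            break
        heap[largest], heap[i] = heap[i], heap[largest]
        i = largest

def push(heap, item):
    heap.append(item)
    _sift_up(heap, len(heap) - 1)

def pop_top(heap):
    top = heap[0]
    heap[0] = heap[-1]
    heap.pop()
    if heap:
        _sift_down(heap, 0)
    return top

solutions = []
for item in [('sol1', 3), ('sol2', 7), ('sol3', 5)]:
    push(solutions, item)
print(solutions[0])   # ('sol2', 7) -- the top rated solution, readable in O(1)
```

Both push and pop cost O(log n) storage writes, which matters when every write is 5k-20k gas.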
Another factor to weigh is the gas price of storage. Having too many repeated storage slots can be costly. Writing a new item to storage is priced at 20k gas, overwriting an existing one at 5k, and deleting an item (by setting it to 0) actually nets you a 10k refund.
Again, implementation is pending so check out my next blog post :)

Wednesday, June 24, 2015

Correct SCons variantdir and emitters

I'm using SCons to build my C++ stuff across platforms and as usual, my build config is gradually getting more complex. I always like to have build output in a separate directory, for cleanliness. I use a VariantDir command to do that. The problem is that variant dirs are always a bit tricky to understand and do properly, so here are some notes on how to avoid screwing up.

Use the Node, Luke!

Items in the SCons build tree are represented as Nodes, not only plain file names. In the case of an output into your VariantDir, the node will remember the output path (such as build/file.o) as well as the original source input path (file.o) and for both of these, it also knows the absolute path. These properties are something you'll always want to see when debugging.


print n.abspath

print n.srcnode().abspath


See the section File and Directory Nodes for specific property documentation.

Use the Emitters, Leia!

SCons is a little obsessive and really likes to keep track of everything. It likes to know what files come in and what will fall out. With this information, it can make sure everything is properly rebuilt on any change and it can nicely clean your directory with the -c switch.

If you need to call some external command, it's a good idea to provide this information to SCons so that it knows what will happen. In my build, I need to generate header files for JNI classes using javah. The built-in tool doesn't really work for me because it needs Java compilation first so I ended up writing my own.

The file and class names in Java are tightly coupled, you can pretty much just do 

file = clsname.replace('.', '/') + '.java'

to find the source file for a class. I'm using this fact to make my emitter. I take great care to have the correct .java files listed as the source for the Builder. Having only the directory just doesn't cut it, I have to Glob() in subdirs too. To have a good idea of what's happening, I first debug-print my source and target nodes in the emitter:

def emit_javah(target, source, env):
    print 'emit source', [x.abspath for x in source]
    print 'emit target orig', [x.abspath for x in target]


The emitted target node doesn't need to have an absolute path or contain the VariantDir name, that should be handled by SCons. Just imagine you are building in the same directory and return a relative path.
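Putting it together, here is roughly what such an emitter ends up doing, sketched as plain Python so it runs outside SCons (strings stand in for Nodes, and the javah dots-to-underscores header naming is assumed; your class-to-file mapping may differ):

```python
# A fuller sketch of the javah emitter. Inside SCons, target and source
# are lists of Nodes; plain strings stand in for them here so the sketch
# runs on its own. javah names the header after the class, dots -> underscores.

import os

def emit_javah(target, source, env):
    # One header per .java source, relative path only -- SCons itself
    # prepends the VariantDir when it resolves the returned targets.
    target = []
    for src in source:
        clsname = os.path.splitext(src)[0].replace('/', '.')
        target.append(clsname.replace('.', '_') + '.h')
    return target, source

targets, sources = emit_javah([], ['com/example/Native.java'], env={})
print(targets)   # ['com_example_Native.h']
```

The real emitter would call src.srcnode() to get at the original .java path before doing the string surgery.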

Thursday, May 28, 2015

Building an ethereum ÐApp, part III

What is ethereum and ÐApps? Check here  or search
This is part III of a series. Part I

Diving into the code

My simple proof-of-concept app can be seen at https://github.com/Quiark/Roboth.web3 and is based on the meteor-dapp-boilerplate project. The smart contract is called Roboth and is deployed on the (currently testing) blockchain, registered with the Global Registrar under the same name. However, I'm still working on it so be prepared to encounter a broken, invalid or a stupid deployment at any time.

Thoughts on deploying beta contract versions

Now, it is clearly not a best practice to push stupid code right into the public production environment. I could register the work-in-progress update with the Registrar under a different name, such as "Roboth.RC-1", and configure my JS frontend to interface with this instance. Alternatively, I could run geth (the ethereum client) on a private testnet using the command line switch

geth --networkid <random number here> --maxpeers 0


or by disconnecting my wifi. It would also require me to clean my blockchain database because I would effectively be starting from scratch. In this way, I could mine all the ether by myself and thus have enough for funding any experiments.

Simple Python compile & deploy script

If you prefer your cozy text editor over cool web based development environments, you may find my Python script for compilation and deployment mildly useful. It's included right there in the Roboth.web3 repository as tools/contract.py, for free without any hidden costs.

It can handle the following tasks:
  • compile contract code on your geth node (I'm using Windows and don't have a solc binary)
  • deploy compiled contract
  • register the newly deployed contract's address with the Registrar
  • remember compiled code, ABI and address so you can go back and use any earlier-deployed version in case you forgot some semi-important data there (you don't have any really-important data because otherwise you'd be using some more serious and stable software)
  • save the new ABI as JSON to a JS file automatically loaded by Meteor
  • invoke some methods of the contract after deployment so you are not testing with an empty database (must be customised for your particular contract)
  • use hard-coded file paths so you know where to put your files by reading source code (ehm)
Currently it cannot do:
To use it, you'll need to modify the code a bit: edit the geth RPC address where EthRpc is instantiated, edit your primary account in prim_acc and possibly also the contract name variable con_name. When running, the current working directory must be tools (that's where the script is located). The tool currently doesn't accept commandline arguments; it must be configured by changing the code at the end of the file.

It also has some dependencies, this one and this one too.

Working with your contract from the JS app

By now you may already be rather familiar with the incantation that takes your contract's binary ABI and its blockchain address and creates a proxy object to call it. It looks like this

this.RegistrarABI = [{"constant":true,"inputs":[{"name":"_owner","........
this.RegistrarAddr = "0xc6d9d2cd449a754c494264e1809c50e34d64562b";

this.RegistrarAPI = web3.eth.contract(this.RegistrarABI);

this.Registrar = this.RegistrarAPI.at(this.RegistrarAddr);

This is required because even though we write the contract code in Solidity, it's compiled into EVM bytecode, and even though we use functions, arrays and mappings, these have a different representation on the blockchain (which is also different from the linear memory layout we are used to with RAM). The JSON RPC we are using operates at a low level and doesn't really know how to call Solidity functions. But the web3.js library knows how, assuming you provide the ABI description that fell out of the Solidity compiler.

So in this code snippet, there's a hardcoded ABI for the official testnet registrar contract that I stole directly from geth source code, and its official testnet address, which I also stole from the same place. Next, RegistrarAPI creates a class as you know it from OOP languages (if you are coming from C++ or Java, you may not believe that a single function call can create a class, but yeah, dynamic languages can do that). On the last line, we instantiate this class using its static method at(), and the instance will communicate with the contract at the given blockchain address.

The same procedure would be used for our own contract except that its ABI is automatically generated by the Python script and included by Meteor from client/lib/compatibility/Roboth.abi.js because it's under rapid development and thus changing all the time. Furthermore, the address is fetched from the Registrar where it's stored by the same script on each deployment. See here for yourself.

Once you have a proxy instance, you can call methods and send transactions almost the same way as in regular OOP languages. There are 2 ways to invoke a method, as explained in the Frontier Guide.

The simplest way ever to store a growing mapping in Solidity

Assigning some data to an user or an address in Solidity is quite easy, just use the mapping type:

mapping (address => MyData) mydatas;

What happens, however, when you want to iterate over the keys or values to display them in your app? This is not currently supported and I believe it wouldn't be so easy to implement because the data layout is not linear. A simple solution is to add an integer index

mapping (uint => address) users;
uint next_user_ix;

Now we can iterate from 0 to next_user_ix and get all users in the range. Of course, this requires that you maintain the index manually, incrementing it each time a value is added to the original mapping. This approach is very simple but it doesn't really work well when you also need to remove values. See the forum post on this problem for other people's ideas.


Ethereum value data types

I recommend always storing account balances in wei, as Strings or BigNumbers. Javascript doesn't handle large integers correctly and wei balances are always going to be pretty large. Furthermore, given the number of units or denominations for ether, mixing them up in the code is a really big danger. The only way to stay sane is to stick with wei, just like the JSON RPC does, and only convert to human-friendly units in the templates (using the toEth template helper).
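To see why plain JS Numbers are out, compare the magnitudes. This quick check is in Python (whose integers are arbitrary precision, so the check itself is exact), but the rounding it demonstrates is exactly what a JS Number, i.e. a 64-bit float, would do:

```python
# Why wei balances must not go through a JS Number: one ether is 10**18 wei,
# far beyond the 2**53 - 1 integers a 64-bit float can represent exactly.

WEI_PER_ETHER = 10 ** 18
JS_MAX_SAFE_INTEGER = 2 ** 53 - 1   # Number.MAX_SAFE_INTEGER

balance_wei = 3 * WEI_PER_ETHER + 1          # 3 ether and 1 wei
assert balance_wei > JS_MAX_SAFE_INTEGER     # a JS Number would round this

# Going through a 64-bit float silently loses the trailing 1 wei:
print(int(float(balance_wei)) == balance_wei)   # False
```

Strings survive the JSON RPC round-trip unchanged, which is why the RPC itself uses them.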

Similarly with addresses: they come as hex strings and should stay in that format.

Reacting to data from blockchain

Meteor has a neat functionality that enables auto-refreshing your HTML DOM when source data changes. It's called being reactive™. We can use this function to some extent but keep in mind that operations on the blockchain are not instant (and also not immediately reliable until all small forks are abandoned).

The most reliable way to observe changes in the blockchain is to use Solidity events and install filters from the RPC. However, if you don't have that for whatever reason, you can just keep polling every 6 seconds or so.

The class BlockchainTracker is a simple wrapper that will fire an update on its ReactiveVar when the latest block number changes. This can be observed in an autorun function to trigger a refresh from the blockchain. See UserDataManager for an example of a dataset that needs to be updated when a new item gets added. This simple solution doesn't handle updates from other users and it may miss changes that appear 2 blocks later.
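The polling idea behind BlockchainTracker can be sketched like this (Python standing in for the JS; get_block_number stands in for web3.eth.blockNumber and the callback list for the ReactiveVar — the names are hypothetical):

```python
# The idea behind BlockchainTracker: poll the latest block number and
# fire callbacks only when it changes, not on every poll.

class BlockTracker:
    def __init__(self, get_block_number):
        self.get_block_number = get_block_number   # e.g. a web3 call
        self.last_seen = -1
        self.callbacks = []

    def on_new_block(self, cb):
        self.callbacks.append(cb)

    def poll_once(self):
        """Call this every ~6 seconds (roughly one block time)."""
        current = self.get_block_number()
        if current != self.last_seen:
            self.last_seen = current
            for cb in self.callbacks:
                cb(current)

seen = []
fake_chain = iter([7, 7, 8])                 # stub: two polls, then a new block
tracker = BlockTracker(lambda: next(fake_chain))
tracker.on_new_block(seen.append)
for _ in range(3):
    tracker.poll_once()
print(seen)   # [7, 8] -- one refresh per new block, not per poll
```

In the Meteor app the callback just bumps a ReactiveVar, and autorun functions depending on it re-query the contract.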

Conclusion

The app is still very much in development with many rough edges but I hope people starting out with ÐApps may find these notes useful.

Friday, May 22, 2015

Building an ethereum ÐApp, pt. II

Hiccups on the way to ÐApping

What is ethereum and ÐApps? Check here  or search
This is part II of a series. Part I

Not using enough gas for transactions

You know it - you send a transaction, scratch your head but nothing happens. You wait, see blocks being crafted but your transaction is just sitting there, forgotten. When you check its status using

// 0xTRANSACTIONID is the return value from eth.sendTransaction
// or you can see it in the verbose logs of geth
eth.getTransaction('0xTRANSACTIONID').blockNumber


you get either 0 or an error. 

Make sure you are using enough gas for the operation. Each transaction in ethereum can be one of the following 3 types
  1. just value transfer (sending eth to your friend)
  2. invoke a contract (possibly with value transfer)
  3. create a contract
And each requires a different amount of gas. Excess gas is refunded so you can beef it up easily. For example to deploy my contract, I would use this call:

eth.sendTransaction({from: eth.accounts[0], data: code, gas: 1800000})


Not enabling CORS for HTML5 apps

If you decided to access the geth client from a JS+HTML5 app, you may find that the web3.js module is unable to connect because of the Cross-Origin Resource Sharing restrictions in the browser. You can see it in the F12 Developer Tools console. To fix this, make sure you have launched your geth instance with the correct arguments. If your JS app is served from http://localhost:3000, it would look like this:

geth --rpc --rpcaddr="localhost" --rpcport="8545" --rpccorsdomain="http://meteor-dapp-cosmo.meteor.com http://localhost:3000"

The --rpccorsdomain argument is key, it allows these origins to access the RPC. Note that you can specify more than one domain, just separate with spaces. This commandline will allow you to run the Cosmo web app at http://meteor-dapp-cosmo.meteor.com with your local geth client.

Note that if you try to invoke the HTTP RPC requests manually, you won't get the Access-Control-Allow-Origin: header unless you add the Origin: header first.


Incorrect compilation or construction

When you pack your little contract's lunch and send it off to the cloud, it may fail to stick even though you gave it enough gas to fly all the way there and the transaction was processed. But when you execute

eth.getCode('0xCONTRACTADDR')

you get '0x', nada, nothing. It is useful to know that the EVM bytecode you send as data gets executed, and its result becomes the actual contract code living on the blockchain. This is how constructors work.

If your code is corrupt or the constructor encounters a problem, you may end up in this situation. In my case, I incorrectly copied the hex contract output from the compiler.


Meteor: global variables

I tried to instantiate a contract client in my Meteor app in a template.js file like this:

RegistrarABI = [{"constant":true,"inp.....snip
RegistrarAddr = "0xc6d9d2cd449a754c494264e1809c50e34d64562b";
RegistrarAPI = web3.eth.contract(RegistrarABI);
Registrar = RegistrarAPI.at(RegistrarAddr);

but alas, these variables were not available in my helpers or elsewhere.

Turns out that Meteor executes template JS code in a different context, so these variables were not global. My quick&dirty solution was to attach them to the window object, which is global, but a better approach is clearly putting that code into the client/compatibility folder, which is designated for "outsiders".


ADDED: Development FAQ

You can join the go-ethereum gitter channel and search; many questions have been asked there already and there are some examples / clarifications too. Just remember not to be an ass: try to search a bit first before distracting the devs.

Friday, May 15, 2015

Building an ethereum ÐApp, pt. I

What is ethereum and ÐApps? Check here  or search
This is part I of a series. Part II
This article is from May 2015, check the update in Part IV

This is a collection of notes I took as I was writing a simple proof of concept ÐApp; hopefully it will prove useful to others and reduce their bleeding when working with such cutting-edge technology. I also hope to address some practical concerns which are beyond the scope of other basic tutorials.

I first met the ethereum project at a meetup in Hong Kong in summer 2014 where Vitalik Buterin presented the project himself. Since then, I've been watching it and growing more interested, and after realising its potential (through Vitalik's posts on blog.ethereum.org) I got so excited that I went ahead and started writing my own ÐApp, like a true hacker nerd.

ÐApp components, structure

The most important part is, of course, the ethereum client (currently go-ethereum or geth). It is also called a node because it connects with other nodes to form the network (nodes usually also run mining) and you may also think of it as a wallet (in the bitcoin sense) because it keeps your private key and allows you to send transactions. As such, users of your ÐApp will either need to run their own ethereum client or use a web based service (a parallel to https://blockchain.info/wallet) but that means increased centralisation and having to trust that service.

Another part of any app is the GUI. That's something you'll be building yourself. You can go ahead and use any of the old boring GUI frameworks such as Qt, HTML5 or Android, as long as you know how to connect to the wallet of your user. The connection happens over HTTP JSON-RPC (documentation here), which means that even a JS/HTML5 GUI served over the web can still connect to a wallet on localhost.

Since storage and processing on ethereum blockchain is not so cheap, you may also want to run your centralised server in the old fashioned way, such as Node.js on Amazon EC2 or a Haskell server on your Commodore 64 in your grandma's basement (that would be slower than the ethereum blockchain actually, but equal in coolness factor). This server would handle data that doesn't need to be protected by the blockchain. Remember, the point of blockchain is to have a global consensus on sensitive data (such as people's account balances, domain name registrations) and making sure they are not modified behind anyone's back. Other, more trivial or sizeable data for your app, however, can be stored outside the blockchain, for example uploaded files / pictures / videos. Your server may need to run the ethereum client too, to have access to the latest blockchain state.

Installing and running geth

There's not much to say here, just follow the homepage https://github.com/ethereum/go-ethereum. I recommend just downloading the binaries, they are built automatically for Windows, OSX, Ubuntu. Currently, go ahead with the develop branch. Don't bother with the Mist user interface, geth is all you need (also love). I also recommend going through the Frontier Guide that is being collected at http://ethereum.gitbooks.io/frontier-guide and trying out the examples to understand more.

If, upon starting geth, you can't connect to any peers, try starting with

geth --vmodule=udp=6,server=6,downloader=6 console

to get extended logging for the networking code. Make sure your computer's clock is correct. Note that the message about no UPnP device found just means that geth couldn't set up port forwarding on your router automatically using UPnP. No big deal.

Getting a Meteor app skeleton

For this project's GUI + centralised server, I chose the Meteor webapp client and server framework because it's new, hip, cool and everybody else seems to be using it too. Hopefully it'll make me look cool too. If you've never heard about it, let me summarize it as a batteries-included, everything-prepared framework that bundles Node.js, MongoDB, a reactive (auto-updating) templating engine and other tools to make building and deploying webapps really easy. Both server and client code are written in JavaScript. You may want to go through the tutorial to get some idea of how things work there.

I started by installing Meteor from the homepage at https://www.meteor.com and then cloning this useful repo https://github.com/SilentCicero/meteor-dapp-boilerplate as the basic template. Meteor runs on port 3000 by default. Then, start geth with JSON-RPC enabled (the default port is 8545) and allow CORS so that your Meteor app can access it from the browser:

geth --rpc --rpccorsdomain "http://localhost:3000" console 2> geth_stderr.txt

Note: do NOT use --rpcaddr "0.0.0.0" or you'll lose money: anyone who can reach the RPC port can control your node and its unlocked accounts. Also, enable a firewall to prevent access to your node from the outside.

To see log messages, watch the file geth_stderr.txt, ideally using tail -f geth_stderr.txt. You'll still interact with the client in the JS console it provides; don't forget to check out the documentation.
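A few things worth trying in that console are sketched below as comments; they are geth/web3 calls, so they run inside the console, not in plain Node. Balances come back in wei, and the small conversion helper (weiToEther, my own name for a plain-JS stand-in for web3.fromWei) is something you can reuse in your own GUI code:

```javascript
// Inside the geth JS console you can poke at the node directly, e.g.:
//
//   net.peerCount                      // how many peers you are connected to
//   eth.blockNumber                    // current chain height
//   eth.accounts                       // your account addresses
//   eth.getBalance(eth.accounts[0])    // balance of the first account, in wei
//
// Balances are in wei (10^18 wei = 1 ether). A plain-JS conversion for
// display purposes could look like this:
function weiToEther(wei) {
  // Number loses precision above 2^53 wei; fine for a rough display value.
  return Number(wei) / 1e18;
}

console.log(weiToEther('1500000000000000000')); // 1.5
```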

I can already feel your head exploding from the overflow of information in this and the linked articles. Let's wrap it up now, have a good sleep and continue next time with some actual code, perhaps even with some troubleshooting tips (for free!). I'm planning to put my little ÐApp on GitHub as well, sometime very soon.

Monday, April 27, 2015

Sabah impressions

I'm a nature nut. Trekking in a real, old rainforest has always been something I wanted to do, ideally spotting some wild animals too. That's why I went to Sabah, Borneo last week.

The first thing I found was that a big part of the place was either already developed (into towns) or occupied by agriculture (palm oil). Of course the people need to live somehow, but it also means that the days when an orangutan could swing from tree to tree all across Borneo are gone.

After spending one night in Sandakan, I went to the Kinabatangan river on an organised tour. It is said that the Kinabatangan river is the best place to spot wildlife, and I was fairly lucky. It really paid off to buy binoculars; that was one of my brain's brighter moments. One of the first things we saw was a wild orangutan, at a distance. Seeing this is very rare nowadays and I could well be one of the last people to have such an opportunity. We also saw a couple of other monkey species; the way they can jump from one tree to another is quite amazing. Imagine if you had to do that...

A night walk in the forest around our accommodation gave us a chance to see a few sleepy birds (so sleepy, in fact, that they didn't even care about the flashlight or camera flash), insects and a deadly yellow viper. Leeches, which I expected to show up without fail, did not actually arrive. Maybe they don't like my kind of blood.



The orangutan I adopted, Gellison. Well isn't he like me? ;)
The next day I went to the Sepilok Orangutan Conservation Centre and saw a bunch of orange furry apes swinging their way over to grab some free bananas during feeding time. On the way out, there was a green pit viper, the kind that can see heat as well as ordinary light. The Conservation Centre is actively trying to help orphaned orangutans survive and return them to the wild. And since I love nature and would like them to thrive again, I made a donation / adopted one. Next to the SORC, there is also a Rainforest Discovery Center where people can have a light trek in a limited area of real primary rainforest. That is also pretty amazing, mostly due to the huge trees which form different levels in the forest for different kinds of animals.

Troubleshooting USB driver in Windows

Disclaimer: techniques described in this article are not supported by me or Microsoft. They are very likely to break your system and make it unusable, so anything you try is at your own risk and assumes that you are able and willing to fix it yourself. That should be pretty obvious anyway.


I still use Windows 7, and recently it has been a bit like being the last survivor in an abandoned city after most other people have left for OS X or the Ubuntu village. And yes, things break, stop working or simply get stuck. The reason is probably that I overload my system with tons upon tons of programs and libraries (I can't even count how many programming environments I have set up) and that it's in a state of general messiness.

The most recent problem I had was the mouse not working after wakeup. Something inside the system was stuck because some system calls seemed to be taking their time.

The first thing I thought of was restarting the PnP hardware service or subsystem. That turned out to be not really possible, but at least I set the ShellHWDetection service to run in its own process instead of sharing a process with other services. This allows me to restart it in case of problems. To do that, use this command in an elevated prompt:

sc.exe config ShellHWDetection type= own

(keep the space after type=; the change takes effect upon reboot).

You can see which services run in each process using the excellent Process Explorer from www.sysinternals.com.

I also noticed that I have the VMware USB Arbitration Service running, and that's clearly a good target for troubleshooting my mouse problem. Since I use VirtualBox, not VMware, on my machine, having this service is a bit redundant (it probably comes with the vSphere client that I use to administer our ESXi server). Disabling it alone, however, is not enough.

After searching a bit, I was able to find the list of hidden non-PnP drivers in the Device Manager. To get there, right-click your computer on the desktop and select Manage... Then, in the list on the left, select Device Manager and finally enable Show hidden devices in the View menu.

This list is pretty interesting on its own, but for my problem, I found that I have the VMware hcmon driver installed. Given that I only use the vSphere client and was having issues with the mouse, I decided to disable it and see if that helps. I also disabled the VirtualBox USB Monitor Driver. Of course I won't be able to use the mouse in my VMs now, but if this fixes my problem, it'll be an interesting discovery. Some day I should try disabling one service after another in a VM to see how it crashes the system.

So far it seems to have helped :)
Additionally, there's a command that can list all drivers on your system. You can use it to find drivers not signed by Microsoft, which could be a cause of trouble. It's

driverquery.exe /fo csv /v


in combination with sigcheck.exe, again from Sysinternals, as described here: http://serverfault.com/questions/130042/is-there-a-command-line-equivalent-of-sigverif-exe
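If you want to automate that combination, one option is a small script that pulls the driver paths out of driverquery's CSV output and feeds them to sigcheck one by one. Here's a sketch in Node.js; it assumes the /v output has "Path" in its header row (it does on my box), the CSV parsing is naive (no escaped quotes), and the sample data is made up to match the output's shape:

```javascript
// Split one CSV line into fields, stripping the surrounding quotes.
// Naive: handles quoted fields with commas, but not escaped quotes.
function parseCsvLine(line) {
  var fields = line.match(/"[^"]*"|[^,]+/g) || [];
  return fields.map(function (f) { return f.replace(/^"|"$/g, ''); });
}

// Given the full CSV text from `driverquery.exe /fo csv /v`,
// return the file path of every listed driver.
function driverPaths(csv) {
  var lines = csv.trim().split(/\r?\n/);
  var header = parseCsvLine(lines[0]);
  var pathCol = header.indexOf('Path');
  return lines.slice(1).map(function (l) { return parseCsvLine(l)[pathCol]; });
}

// Hypothetical two-line sample in the same shape as the real output:
var sample = '"Module Name","Display Name","Path"\n' +
             '"hcmon","VMware hcmon","C:\\Windows\\system32\\drivers\\hcmon.sys"';
console.log(driverPaths(sample)); // [ 'C:\\Windows\\system32\\drivers\\hcmon.sys' ]
```

Each returned path could then be passed to sigcheck.exe (e.g. via child_process.execFile) to check the signer.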