Tim Mattison

Hardcore tech

Mockito and ServletInputStreams


I was recently working on a few servlet-based applications and came across a situation that initially seemed challenging to test with Mockito. I wanted to do something relatively simple: read a Protobuf sent from a client, turn it into an object, and do some processing on it.

The question is how do you test a servlet that needs to get the input stream from a servlet request?

I found a Stack Overflow post that addresses how to do this with an older version of ServletInputStream, but doing it now requires that you override three additional methods (isFinished, isReady, and setReadListener).

My issue with this is that I don’t want to override those methods because I don’t really know what I want them to do. If I’m mocking something I want to make sure I know when and where it will be used or I want the mocking framework to return default values or throw exceptions so I know where to look when something breaks.

What I landed on was using the thenAnswer method like this:

Mocking a ServletInputStream
import java.io.ByteArrayInputStream;

import javax.servlet.ServletInputStream;

import org.mockito.Matchers;
import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;

import static org.mockito.Matchers.anyInt;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

byte[] myBinaryData = "TEST".getBytes();
// final so the anonymous Answer below can reference it
final ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(myBinaryData);

ServletInputStream mockServletInputStream = mock(ServletInputStream.class);

// Delegate read(byte[], int, int) calls to the real ByteArrayInputStream so the
// mock serves our test data without overriding isFinished/isReady/setReadListener
when(mockServletInputStream.read(Matchers.<byte[]>any(), anyInt(), anyInt())).thenAnswer(new Answer<Integer>() {
    @Override
    public Integer answer(InvocationOnMock invocationOnMock) throws Throwable {
        Object[] args = invocationOnMock.getArguments();
        byte[] output = (byte[]) args[0];
        int offset = (int) args[1];
        int length = (int) args[2];
        return byteArrayInputStream.read(output, offset, length);
    }
});

If you ever need to mock a ServletInputStream, feel free to use this code to do it. So far it has worked perfectly for me.
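For context, here's roughly how the mocked stream gets wired into a test. The request mock and the final read below are illustrative (the names are mine, not from the original test), but this is the general shape:

Using the mocked stream (illustrative)
import javax.servlet.http.HttpServletRequest;

HttpServletRequest mockRequest = mock(HttpServletRequest.class);
when(mockRequest.getInputStream()).thenReturn(mockServletInputStream);

// The code under test can now read "TEST" out of the request
byte[] buffer = new byte[4];
int bytesRead = mockRequest.getInputStream().read(buffer, 0, buffer.length); // returns 4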

Fixing Javac on Mac OS When Multiple JVMs Are Installed


For some reason I decided to install the Java 8 JDK a few days ago when I upgraded to Yosemite. In IntelliJ it isn't a problem, but on the command line it isn't so nice. Here's what I get when I try to use javac:

$ javac src/com/timmattison/Main.java
warning: /Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/lib/ct.sym(META-INF/sym/rt.jar/java/lang/Object.class): major version 52 is newer than 51, the highest major version supported by this compiler.
It is recommended that the compiler be upgraded.
warning: /Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/lib/ct.sym(META-INF/sym/rt.jar/java/lang/String.class): major version 52 is newer than 51, the highest major version supported by this compiler.
It is recommended that the compiler be upgraded.
/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/lib/ct.sym(META-INF/sym/rt.jar/java/lang/Object.class): warning: Cannot find annotation method 'value()' in type 'Profile+Annotation': class file for jdk.Profile+Annotation not found
/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/lib/ct.sym(META-INF/sym/rt.jar/java/lang/String.class): warning: Cannot find annotation method 'value()' in type 'Profile+Annotation'
warning: /Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/lib/ct.sym(META-INF/sym/rt.jar/java/lang/AutoCloseable.class): major version 52 is newer than 51, the highest major version supported by this compiler.
It is recommended that the compiler be upgraded.
/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/lib/ct.sym(META-INF/sym/rt.jar/java/lang/AutoCloseable.class): warning: Cannot find annotation method 'value()' in type 'Profile+Annotation'
warning: /Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/lib/ct.sym(META-INF/sym/rt.jar/java/lang/System.class): major version 52 is newer than 51, the highest major version supported by this compiler.
It is recommended that the compiler be upgraded.
/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/lib/ct.sym(META-INF/sym/rt.jar/java/lang/System.class): warning: Cannot find annotation method 'value()' in type 'Profile+Annotation'
warning: /Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/lib/ct.sym(META-INF/sym/rt.jar/java/io/PrintStream.class): major version 52 is newer than 51, the highest major version supported by this compiler.
It is recommended that the compiler be upgraded.
/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/lib/ct.sym(META-INF/sym/rt.jar/java/io/PrintStream.class): warning: Cannot find annotation method 'value()' in type 'Profile+Annotation'
warning: /Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/lib/ct.sym(META-INF/sym/rt.jar/java/io/FilterOutputStream.class): major version 52 is newer than 51, the highest major version supported by this compiler.
It is recommended that the compiler be upgraded.
/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/lib/ct.sym(META-INF/sym/rt.jar/java/io/FilterOutputStream.class): warning: Cannot find annotation method 'value()' in type 'Profile+Annotation'
warning: /Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/lib/ct.sym(META-INF/sym/rt.jar/java/io/OutputStream.class): major version 52 is newer than 51, the highest major version supported by this compiler.
It is recommended that the compiler be upgraded.

When I run javac -version I get mostly what I’d expect:

$ javac -version
javac 1.7.0_45

So why is it trying to use libraries from the Java 8 JDK? Simply because I forgot to set JAVA_HOME. On Mac OS you can quickly fix this by adding the following line to your .bash_profile and starting a new Terminal session:

export JAVA_HOME="/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/"

Of course you should change _45 to reflect the specific version you’re running and validate that the path in the JAVA_HOME variable exists.
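Alternatively, Mac OS ships a helper that resolves the JDK path for you, which avoids hard-coding the update number (this assumes the standard /usr/libexec/java_home helper, which has been present on every recent Mac OS install I've seen):

export JAVA_HOME="$(/usr/libexec/java_home -v 1.7)"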

Good luck!

When Unicode Goes Wrong in Java


NOTE: This is only guaranteed to work with the Sun JVM since this option is “an internal detail of Sun’s implementations”.

UPDATE 2014-11-12 6:22 PM: The real fix is to set the environment variable LANG to en_US.UTF-8 right before you start your JVM.

Is Unicode breaking in your application and you can’t figure out where? Maybe data from HttpClient is coming back mangled, maybe database queries via JDBC are having Unicode data replaced with question marks, maybe your protobufs are getting shredded, but somewhere something is eating your Unicode data and nothing you’ve tried fixes it. Well…

Did you know the JVM itself has a global Unicode setting specified by the -Dfile.encoding option? Most people I talked to didn’t know about it, myself included, when I ran into a Unicode issue on a project. After some great teamwork and research we found this option, set it, and everything started working again.

All we had to do was put -Dfile.encoding=UTF8 in the script that ran our JVM and everything was fixed, but that was only a temporary fix. You really need to set LANG to en_US.UTF-8. If you want to play with it I created a test project on GitHub that is incredibly simple and shows the right and wrong settings and what they do to a simple trademark symbol. Otherwise, try this on your project and see if it fixes the issue.
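If you want a quick way to see what your JVM is actually using, this little check (mine, for illustration) prints the default encoding. If it isn't UTF-8, any byte-to-character conversion that relies on the default can mangle your data:

EncodingCheck.java
import java.nio.charset.Charset;

public class EncodingCheck {
    public static void main(String[] args) {
        // file.encoding feeds the JVM-wide default charset
        System.out.println("file.encoding  = " + System.getProperty("file.encoding"));
        System.out.println("defaultCharset = " + Charset.defaultCharset());
    }
}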

Good luck!

Using Interfaces in Camel’s Java DSL With Spring


When writing some routes with Camel’s Java DSL I came across this exception:

Caused by: java.lang.IllegalStateException: No method invocation could be created, no matching method could be found on: null

After a lot of tracing I figured out that it was related to calling the .bean(...) method with a class that was actually just an interface. Camel wants to instantiate the class you pass in, usually using Spring, but it cannot do that when the class isn't a concrete implementation.

This proved to be a real problem because I had an interface with two implementations: one used for debugging and the other used for production. I didn't want to manually select which one to use in my code because that's Spring's job, so I came up with a way to do it.

For the complete background here’s what my interface looks like:

The interface
import org.apache.camel.Processor;

public interface ProtobufToWire extends Processor {
}

This converts a Protobuf to our “wire” format. That format could be the native protobuf binary format or JSON. I implement this empty interface in two classes called ProtobufToBinary and ProtobufToJson and I want to use the JSON one only for debugging.

To be clear, doing this always fails with the exception listed above:

A route that always fails
from(SOME_URI).bean(ProtobufToWire.class);

To fix this I added this to my Java-based Spring config:

Getting an instance of ProtobufToWire
@Bean
public ProtobufToWire protobufToWire() {
    return new ProtobufToBinary();
}

Now, because (I believe) Camel's bean(...) method doesn't look beans up through Spring, this alone still fails. What I needed where I define my routes is this:

Finally, how to get Camel to instantiate the right type
    @Autowired
    private ProtobufToWire protobufToWire;

    @Override
    public void configure() {
      from(SOME_URI).bean(protobufToWire.getClass());
    }

What I’m doing here is getting Spring to autowire an instance of that interface into a private variable and then asking it for its real concrete type. Part of me says that I shouldn’t have to do this but this is what works for me.

Did this help you out? Do you have a better way to do it? Post in the comments!

Activating U2F on a Yubikey Neo on Mac OS


I just got my Yubikey Neo with U2F support and I felt like the documentation on how to get it up and running was a bit hard to find. If you are having trouble getting started with U2F these few quick steps will help you get through it.

Step 0: Download and install the Yubikey Neo Manager application. This is NOT the Yubikey Personalization Tool! The Yubikey Personalization Tool does not support enabling U2F yet.

Step 1: Open the Yubikey Neo Manager with your Yubikey plugged in and click "Change connection mode" (the button shows the current mode, e.g. [OTP]). [Screenshot: Yubikey Neo Manager main screen]

Step 2: In the "Change connection mode" dialog, check the U2F box to change the setting from OTP to U2F and click OK. [Screenshots: Change connection mode dialog]

The application will now prompt you to remove your device. You can remove it and plug it back in again. Close the Yubikey Neo Manager application.

Step 3: Open Chrome and install the FIDO U2F (Universal 2nd Factor) extension from the Chrome web store.

Step 4: Register on Yubico’s U2F demo page and you’re done.

Now you can log in on the demo page and other sites that support U2F.

Building Apache Camel Applications With Guice


Apache Camel is a great framework for implementing Enterprise Integration Patterns. However, most of the examples you’ll find out there show you how to use it with the Spring framework. I’m much more comfortable with Google Guice since I’ve used it in more production projects.

I did find an example of how to use Guice with Apache Camel but it wasn’t commented well and involved doing a lot of extra work that didn’t provide me any benefits. So below I’ve listed the things that you’ll need to do to get Guice and Camel working together. What we are doing here is setting up Guice as a JNDI provider and automatically loading a Guice CamelModule via JNDI.

Step 1: Create a jndi.properties file in your project's resources directory. The java.naming.factory.initial line tells JNDI to use Guice, and the org.guiceyfruit.modules line tells the javax.naming.InitialContext class which module(s) it should load at startup.

jndi.properties
# Guice JNDI provider
java.naming.factory.initial = org.apache.camel.guice.jndi.GuiceInitialContextFactory

# list of guice modules to boot up (space separated)
org.guiceyfruit.modules = com.timmattison.CamelGuiceApplicationModule

Step 2: Create a class with a static main method that will run your Camel routes. Because JNDI and Guice do most of the work there isn’t much to do here.

com.timmattison.CamelApplication
package com.timmattison;

import javax.naming.InitialContext;

/**
 * Created by timmattison on 10/27/14.
 */
public class CamelApplication {
    public static void main(String[] args) throws Exception {
        // Creating the InitialContext bootstraps Guice (and the Camel context) via jndi.properties
        InitialContext context = new InitialContext();

        // Loop forever
        while (true) {
            // Sleep so we don't kill the CPU
            Thread.sleep(10000);
        }
    }
}

Step 3: Create a class that extends RouteBuilder and implements one or more routes.

In my case I created a RestRoutes class that used the RESTlet framework and created a single route using the Direct component.

I moved the constants out to separate classes so they’d be easier to refer to in other places if necessary.

com.timmattison.CamelConstants
package com.timmattison;

/**
 * Created by timmattison on 10/27/14.
 */
public class CamelConstants {
    public static final String DIRECT_TEST_ROUTE_1 = "direct:testRoute1";
}
com.timmattison.HttpConstants
package com.timmattison;

/**
 * Created by timmattison on 10/27/14.
 */
public class HttpConstants {
    public static final String TEST_URL_1 = "/test1";
    public static final String TEST_URL_2 = "/test2";
    public static final String TEST_URL_3 = "/test3";
}
com.timmattison.RestRoutes
package com.timmattison;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.rest.RestBindingMode;

/**
 * Created by timmattison on 10/27/14.
 */
public class RestRoutes extends RouteBuilder {
    public static final String RESTLET = "restlet";
    public static final int PORT = 8000;

    @Override
    public void configure() throws Exception {
        restConfiguration().bindingMode(RestBindingMode.auto).component(RESTLET).port(PORT);

        rest(HttpConstants.TEST_URL_1)
                .get().to(CamelConstants.DIRECT_TEST_ROUTE_1);
    }
}

Step 4: Create the interfaces and the implementations that we’re going to use in our route.

Here we’re creating four things:

  1. The interface (SayHello1) that handles the route and gets injected with Guice via JNDI. This interface doesn’t do anything other than give Guice a way to reference implementations of it.
  2. An implementation of that interface (BasicSayHello1). BasicSayHello1 also has a dependency that we want injected with Guice, to make the example more complete.
  3. The interface for the class that we want Guice to inject (MessageHandler).
  4. The implementation that gets injected (BasicMessageHandler).
com.timmattison.jndibeans.interfaces.SayHello1
package com.timmattison.jndibeans.interfaces;

import org.apache.camel.Processor;

/**
 * Created by timmattison on 10/27/14.
 */
public interface SayHello1 extends Processor {
}
com.timmattison.jndibeans.BasicSayHello1
package com.timmattison.jndibeans;

import com.timmattison.jndibeans.interfaces.SayHello1;
import com.timmattison.nonjndibeans.interfaces.MessageHandler;
import org.apache.camel.Exchange;

import javax.inject.Inject;

/**
 * Created by timmattison on 10/27/14.
 */
public class BasicSayHello1 implements SayHello1 {
    private final MessageHandler messageHandler;

    @Inject
    public BasicSayHello1(MessageHandler messageHandler) {
        this.messageHandler = messageHandler;
    }

    @Override
    public void process(Exchange exchange) throws Exception {
        exchange.getOut().setBody(messageHandler.getMessage(getClass().getName()));
    }
}
com.timmattison.nonjndibeans.interfaces.MessageHandler
package com.timmattison.nonjndibeans.interfaces;

/**
 * Created by timmattison on 10/28/14.
 */
public interface MessageHandler {
    public String getMessage(String input);
}
com.timmattison.nonjndibeans.BasicMessageHandler
package com.timmattison.nonjndibeans;

import com.timmattison.nonjndibeans.interfaces.MessageHandler;

/**
 * Created by timmattison on 10/28/14.
 */
public class BasicMessageHandler implements MessageHandler {
    @Override
    public String getMessage(String input) {
        return "Hello " + input + "!";
    }
}

Step 5: Create the direct route that handles requests forwarded from RestRoutes.

com.timmattison.DirectTestRoutes
package com.timmattison;

import com.timmattison.jndibeans.interfaces.SayHello1;
import org.apache.camel.builder.RouteBuilder;

/**
 * Created by timmattison on 10/27/14.
 */
public class DirectTestRoutes extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from(CamelConstants.DIRECT_TEST_ROUTE_1)
                .beanRef(SayHello1.class.getName());
    }
}

Step 6: Create a Guice module that extends CamelModuleWithMatchingRoutes. I bound my SayHello1 interface to BasicSayHello1, MessageHandler to BasicMessageHandler, and included my RestRoutes and DirectTestRoutes.

com.timmattison.CamelGuiceApplicationModule
package com.timmattison;

import com.timmattison.jndibeans.BasicSayHello1;
import com.timmattison.jndibeans.interfaces.SayHello1;
import com.timmattison.nonjndibeans.BasicMessageHandler;
import com.timmattison.nonjndibeans.interfaces.MessageHandler;
import org.apache.camel.guice.CamelModuleWithMatchingRoutes;

/**
 * Created by timmattison on 10/27/14.
 */
public class CamelGuiceApplicationModule extends CamelModuleWithMatchingRoutes {
    @Override
    protected void configure() {
        super.configure();

        bind(SayHello1.class).to(BasicSayHello1.class);

        bind(MessageHandler.class).to(BasicMessageHandler.class);

        bind(RestRoutes.class);
        bind(DirectTestRoutes.class);
    }
}

Now if you don’t want Guice to handle any external JNDI bindings then you’re done. You can run this application as-is and it will serve up the RESTlet route. You can test it by using cURL like this:

$ curl http://localhost:8000/test1
Hello com.timmattison.jndibeans.BasicSayHello1!

If you want to have Guice handle JNDI bindings you can easily add those into your module. For example, if I wanted to be able to get an instance of SayHello1 by using the JNDI name sayHello1FromGuice I could add this to my module:

    @Provides
    @JndiBind("sayHello1FromGuice")
    SayHello1 sayHello1FromGuice(Injector injector) {
        return injector.getInstance(SayHello1.class);
    }

This tells JNDI that our Guice provider will handle any JNDI requests for this name. Luckily, we didn’t have to create any of these manually because Guice automatically creates JNDI bindings for anything that you’ve called bind on using its class name.

For example there is an automatic JNDI binding for com.timmattison.jndibeans.interfaces.SayHello1 because we called bind(SayHello1.class).to(BasicSayHello1.class). If we ever want an instance of whatever Guice has bound to this we can ask JNDI for it using SayHello1.class.getName().
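To make that concrete, anywhere you have an InitialContext you can look the implementation up by the interface's class name. This is just an illustrative sketch (the variable names are mine):

Looking up a Guice-bound bean via JNDI
InitialContext context = new InitialContext();
SayHello1 sayHello1 = (SayHello1) context.lookup(SayHello1.class.getName());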

You’ll notice that in our DirectTestRoutes class we routed the direct test route to beanRef with the parameter SayHello1.class.getName(). That’s all you need to do as you add more classes to your Camel routes.

Want to try this out without building everything from scratch? Head over to my apache-camel-guice repo on Github.

Good luck! Don’t forget to post in the comments!

Hacking Together a Super Simple Webserver With Netcat on a Raspberry Pi


A few months ago I wanted to get some data out of a WeatherGoose II Climate Monitor so I could convert it into JSON and consume it in another application. I hacked something together and converted their format to JSON in a few hours as a proof-of-concept and the code sat for a few months.

A co-worker recently asked me if they could hook up to my script with a browser to try to do some visualization. I didn’t want to install Apache or nginx as a front end and I didn’t want to modify the script to run its own webserver so I came up with a one-liner that uses netcat to get the output of my script into their browser.

But wait! Some versions of netcat have an option for exactly this. However, the netcat available on the Raspberry Pi doesn't support it and I didn't want to start downloading new versions.

Here it is:

SCRIPT="./weathergoose.py 192.168.1.99" && PORT="8080" && while true; do $SCRIPT | nc -l -p $PORT; done

You’ll need to set SCRIPT to the script you want to run (including any parameters it needs) and PORT to the port you want to listen on.

Be careful! This is not a real webserver. This just spits your script's output back to the browser. Anything the browser sends to the script is ignored.

Also, the script runs first and pipes its output to netcat. This happens before netcat accepts a connection and can cause some confusion. Here’s a concrete example.

Assume I wrote a script that just returns the time. If I use the above snippet and start it at 5:00 PM but I hit it with my web browser at 5:15 PM the time that I get back will be 5:00 PM. The next time I hit it it will be 5:15 PM. The easiest way to think about it is that you get the data from when the script started or at the time of the previous request, whichever is later.
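If the stale data matters to you, here's one possible direction. It's an untested sketch, so treat it as a starting point: a named pipe makes the script wait until a request line actually arrives before it runs, and the -q 1 flag (which assumes a netcat build that supports it) makes netcat exit shortly after the script's output ends.

SCRIPT="./weathergoose.py 192.168.1.99" && PORT="8080" && rm -f response && mkfifo response && while true; do nc -l -p $PORT -q 1 < response | { read -r request; $SCRIPT; } > response; done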

I haven't had the time to test that properly yet, though. Do you have a fix that works? Does this work for you? Post in the comments below.

The First Bitcoin Transaction


Want to understand how Bitcoin transactions work? Follow my next couple of posts for step-by-step explanations of what is going on behind the scenes.

NOTE: These posts are going to be extremely technical.

In this post I’m going to explain the very first Bitcoin transaction in excruciating detail. The first Bitcoin transaction is not the first block ever mined. The first Bitcoin transaction occurred in block 170 when the first Bitcoins were transferred from one address to another.

Each Bitcoin block contains transactions. The first transaction is called the coinbase and is a transaction that actually mines/creates new Bitcoins. All transactions after that are some kind of balance transfer from a set of addresses to another set of addresses.

Here is the first non-mining transaction, from block 170:

Version number: 01000000

Input counter: 01
Input script #0: c997a5e56e104102fa209c6a852dd90660a20b2d9c352423edce25857fcd3704000000004847304402204e45e16932b8af514961a1d3a1a25fdf3f4f7732e9d624c6c61548ab5fb8cd410220181522ec8eca07de4860a4acdd12909d831cc56cbbac4622082221a8768d1d090147304402204e45e16932b8af514961a1d3a1a25fdf3f4f7732e9d624c6c61548ab5fb8cd410220181522ec8eca07de4860a4acdd12909d831cc56cbbac4622082221a8768d1d0901ffffffff

Output counter: 02
Output script #0: 00ca9a3b00000000434104ae1a62fe09c5f51b13905f07f06b99a2f7159b2225f374cd378d71302fa28414e7aab37397f554a7df5f142c21c1b7303b8a0626f1baded5c72a704f7e6cd84cac
Output script #1: 00286bee0000000043410411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3ac00000000

Version number is the little endian representation of the version number of this transaction. Future transactions in a different format could have a different version number so they can be processed in new ways.

Input counter tells us that we should expect 1 input.

Input script #0 contains all the bytes in our input script.

Output counter tells us that we should expect 2 outputs.

Output scripts #0 and #1 contain all the bytes in the two output scripts in this transaction. These outputs show where the Bitcoins are going. In this case they’re being sent to two different addresses.

Let’s look at the input first.

Previous transaction hash: c997a5e56e104102fa209c6a852dd90660a20b2d9c352423edce25857fcd3704
Previous output index: 00000000
Length: 0x48
VIRTUAL_OP_PUSH: 0x47
Bytes to push: 304402204e45e16932b8af514961a1d3a1a25fdf3f4f7732e9d624c6c61548ab5fb8cd410220181522ec8eca07de4860a4acdd12909d831cc56cbbac4622082221a8768d1d0901
Sequence number: ffffffff

Previous transaction hash tells us where to find the transaction that this input is working on. This transaction hash refers to the coinbase in block 9 which mined 50 BTC.

Previous output index tells us which output script in the transaction we should apply this input script to. In this transaction in block 9 there was only one output.

Length tells us the number of bytes that are coming up in our input script. In this case it is 0x48 or 72 bytes.

Now we’re at the actual input script. This input script consists of a single operation (VIRTUAL_OP_PUSH) which pushes a 71-byte value onto the stack. That 71-byte value is a signature covering the previous output and the new outputs, so we know that the person unlocking the coins is the same person spending them.

Bitcoin uses the ECC curve secp256k1 which is part of the SEC 2: Recommended Elliptic Curve Domain Parameters. Therefore all signing and validation operations are performed with the parameters from this curve.

The really interesting part is how we do the transaction validation. That requires a lot of explanation… as if this wasn’t long and complicated enough already.

Let’s look at the output script from block 9:

VIRTUAL_OP_PUSH: 0x41
Bytes to push: 0411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3
OP_CHECKSIG: 0xac

This script pushes a value onto the stack (which happens to be a public key) and then calls OP_CHECKSIG. This is called a pay-to-pubkey transaction. Simply put, it says that anyone who can create a signed transaction with a certain public key can spend this output.

OP_CHECKSIG does four things:

  1. Pops a value off of the stack and calls it the public key
  2. Pops a value off of the stack and calls it the signature
  3. Grabs data from the previous transaction and the current transaction and combines it in a particular way
  4. Computes and checks that the data from step #3 matches the public key and signature from steps #1 and #2

Now we concatenate the input and output scripts into one larger script and get this:

VIRTUAL_OP_PUSH - 71 bytes: 0x47
Signature from block 170: 304402204e45e16932b8af514961a1d3a1a25fdf3f4f7732e9d624c6c61548ab5fb8cd410220181522ec8eca07de4860a4acdd12909d831cc56cbbac4622082221a8768d1d0901

VIRTUAL_OP_PUSH - 65 bytes: 0x41
Public key from block 9: 0411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3

OP_CHECKSIG: ac

This is the most straightforward part of the process. We are pushing the data from the input script from block 170 and then pushing the data from the output script from block 9 and executing OP_CHECKSIG. This ordering makes sure that the person that originally had the Bitcoins maintains control over the final execution. Otherwise it would be possible for an attacker to just dump everything off of the stack except for a final value of 1 which would unlock the coins.

When the Bitcoin state machine sees OP_CHECKSIG then the real work begins.

From above we know we pop the public key off of the stack and then pop the signature off of the stack. Now we need to understand step #3 where we find the data that we’re checking the signature of.

Step 1 – Get a copy of the previous transaction's output script (the VIRTUAL_OP_PUSH and OP_CHECKSIG data from block 9), which will be:

VIRTUAL_OP_PUSH: 0x41
Bytes to push: 0411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3
OP_CHECKSIG: 0xac

We will refer to this as our “new input script”.

Step 2 – Get a copy of the current transaction (again, from block 170)

Current transaction: 0100000001c997a5e56e104102fa209c6a852dd90660a20b2d9c352423edce25857fcd3704000000004847304402204e45e16932b8af514961a1d3a1a25fdf3f4f7732e9d624c6c61548ab5fb8cd410220181522ec8eca07de4860a4acdd12909d831cc56cbbac4622082221a8768d1d0901ffffffff0200ca9a3b00000000434104ae1a62fe09c5f51b13905f07f06b99a2f7159b2225f374cd378d71302fa28414e7aab37397f554a7df5f142c21c1b7303b8a0626f1baded5c72a704f7e6cd84cac00286bee0000000043410411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3ac00000000

Step 3 – Clear out all of the inputs’ script data from this transaction

Before:

:0100000001c997a5e56e104102fa209c6a852dd90660a20b2d9c352423edce25857fcd370400000000
Section to remove: 4847304402204e45e16932b8af514961a1d3a1a25fdf3f4f7732e9d624c6c61548ab5fb8cd410220181522ec8eca07de4860a4acdd12909d831cc56cbbac4622082221a8768d1d0901
:ffffffff0200ca9a3b00000000434104ae1a62fe09c5f51b13905f07f06b99a2f7159b2225f374cd378d71302fa28414e7aab37397f554a7df5f142c21c1b7303b8a0626f1baded5c72a704f7e6cd84cac00286bee0000000043410411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3ac00000000

After:

:0100000001c997a5e56e104102fa209c6a852dd90660a20b2d9c352423edce25857fcd370400000000
NULL placeholder: 00
:ffffffff0200ca9a3b00000000434104ae1a62fe09c5f51b13905f07f06b99a2f7159b2225f374cd378d71302fa28414e7aab37397f554a7df5f142c21c1b7303b8a0626f1baded5c72a704f7e6cd84cac00286bee0000000043410411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3ac00000000

The “after” block translates to:

Version number: 01000000
Input counter: 01

Remaining data from input #0: c997a5e56e104102fa209c6a852dd90660a20b2d9c352423edce25857fcd37040000000000ffffffff

Output counter: 02
Output script #0: 00ca9a3b00000000434104ae1a62fe09c5f51b13905f07f06b99a2f7159b2225f374cd378d71302fa28414e7aab37397f554a7df5f142c21c1b7303b8a0626f1baded5c72a704f7e6cd84cac

Output script #1: 00286bee0000000043410411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3ac00000000

Step 4 – Now remove all of the OP_CODESEPARATORS from the new input script. In block 170 there aren’t any of them so the new input script doesn’t change.

Step 5 – Put the new input script into the signing data at the current input number. In step #3 this means the new input script goes where the NULL placeholder was. This yields:

:0100000001c997a5e56e104102fa209c6a852dd90660a20b2d9c352423edce25857fcd37040000000043410411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3acffffffff0200ca9a3b00000000434104ae1a62fe09c5f51b13905f07f06b99a2f7159b2225f374cd378d71302fa28414e7aab37397f554a7df5f142c21c1b7303b8a0626f1baded5c72a704f7e6cd84cac00286bee0000000043410411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3ac00000000

Which translates to:

Version number: 01000000
Input counter: 01
Previous transaction hash: c997a5e56e104102fa209c6a852dd90660a20b2d9c352423edce25857fcd3704
Previous output index: 00000000
Input script length: 0x43
VIRTUAL_OP_PUSH Input #0: 0x41
Bytes to push Input #0: 0411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3
OP_CHECKSIG Input #0: 0xac
Sequence number: ffffffff

Value bytes: 00ca9a3b00000000
Output script length: 0x43
VIRTUAL_OP_PUSH Output #0: 0x41
Bytes to push Output #0: 04ae1a62fe09c5f51b13905f07f06b99a2f7159b2225f374cd378d71302fa28414e7aab37397f554a7df5f142c21c1b7303b8a0626f1baded5c72a704f7e6cd84c
OP_CHECKSIG Output #0: 0xac

Value bytes: 00286bee00000000
Output script length: 0x43
VIRTUAL_OP_PUSH Output #1: 0x41
Bytes to push Output #1: 0411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3
OP_CHECKSIG Output #1: 0xac
Lock time: 00000000

The value bytes represent the number of satoshi (1/100,000,000th of a Bitcoin) being transferred. Like all other value types in Bitcoin these values are little-endian, so 00ca9a3b00000000 reads as 0x3b9aca00 = 1,000,000,000 satoshi = 10 BTC and 00286bee00000000 reads as 0xee6b2800 = 4,000,000,000 satoshi = 40 BTC. The input was a mined block that created 50 BTC and the two outputs are getting 10 and 40 BTC, respectively.

Step 6 – Add the 32-bit little endian representation of the hash type onto the end of the signing data. The hash type is the last byte of the signature which is 0x01 in this case. Expanded into a 32-bit little endian value makes it 0x01000000. So our final data that needs to be signed is:

:0100000001c997a5e56e104102fa209c6a852dd90660a20b2d9c352423edce25857fcd37040000000043410411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3acffffffff0200ca9a3b00000000434104ae1a62fe09c5f51b13905f07f06b99a2f7159b2225f374cd378d71302fa28414e7aab37397f554a7df5f142c21c1b7303b8a0626f1baded5c72a704f7e6cd84cac00286bee0000000043410411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3ac00000000
Hash type: 01000000

If the signature from block 170 is a valid signature for this blob of binary data we just created, using the public key from block 9, then the transaction is valid.
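If you want to see the hashing half of that check in code, here's a minimal sketch (mine, not from any Bitcoin client): Bitcoin hashes the signing data twice with SHA-256, and that 32-byte digest is what the secp256k1 ECDSA signature is actually verified against. Checking the signature itself would need an ECC library such as Bouncy Castle, which I'm leaving out here.

SigningDataHash.java
import java.math.BigInteger;
import java.security.MessageDigest;

public class SigningDataHash {
    // Decode a hex string into bytes (expects even length, valid hex)
    private static byte[] hex(String s) {
        byte[] out = new byte[s.length() / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = (byte) Integer.parseInt(s.substring(2 * i, 2 * i + 2), 16);
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        // Pass the full signing data from step 6 (the modified transaction plus
        // the little-endian hash type) as one hex string on the command line
        byte[] signingData = hex(args[0]);
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        // Double SHA-256: this digest is what the ECDSA signature signs
        byte[] digest = sha256.digest(sha256.digest(signingData));
        System.out.printf("%064x%n", new BigInteger(1, digest));
    }
}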

Questions? Comments? Post below!

Rid Yourself of Smart Quotes, Smart Dashes, and Automatic Spelling Correction on Mac OS


Have you ever pasted a working bash script or piece of code into a text editor and had it fail to work when you copied it back out later? You’ve probably fallen victim to smart quotes, smart dashes, or automatic spelling correction.

For example, during development I write scripts in Evernote and two very common things happen:

  • aws is persistently and annoyingly replaced by the word “was”
  • Commands that include double or single quotes have those quotes replaced with scripting-hostile quotes that shells don’t understand

In Mac OS we can fix this in a few steps:

  1. Open System Preferences and click on Keyboard
  2. Click “Text”
  3. Uncheck “Use smart quotes and dashes”
  4. Uncheck “Correct spelling automatically”

You’re done! Now your settings should look like this and these “smart” features will never bother you again:

[Screenshot: Keyboard text settings with both options unchecked]
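If you'd rather flip these switches from the command line, the following defaults should do the same thing. I believe these are the key names current versions of Mac OS use, but treat them as an educated guess and restart your applications afterward:

defaults write NSGlobalDomain NSAutomaticQuoteSubstitutionEnabled -bool false
defaults write NSGlobalDomain NSAutomaticDashSubstitutionEnabled -bool false
defaults write NSGlobalDomain NSAutomaticSpellingCorrectionEnabled -bool false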

Using SSH Agent to Simplify Connecting to EC2


TL;DR – Jump to the bottom and look for the eval $(ssh-agent) snippet!

Once you start using EC2 you’ll probably need to do a lot of things that involve SSH. A common frustration is having to specify your identity file when connecting to your EC2 instance. Instead of doing this:

ssh ubuntu@my-ec2-instance

You end up doing this:

ssh -i ~/.ssh/identity-file.pem ubuntu@my-ec2-instance

This gets even more complex when tools built on top of SSH enter the mix. Some of these tools don’t even have a mechanism to specify the identity file. When they do, it often makes the command line really ugly and almost always ties the script to a specific user. For example:

rsync -avz -e "ssh -p1234 -i /home/username/.ssh/identity-file.pem" ...

That command is only going to work for the user username.

How do we make this a lot easier? It turns out there is a very simple way to make all of that pain go away. Whether you use rsync, unison, mosh, scp, or any of a number of other tools that make use of SSH under the hood there is a standard mechanism for SSH to manage your identity. That mechanism is called ssh-agent.

If I try to rsync directly to my EC2 instance I get this:

$ rsync -avzP ubuntu@my-ec2-instance:file-on-ec2.txt local-file.txt
Permission denied (publickey).
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: unexplained error (code 255) at io.c(226) [Receiver=3.1.1]

Instead what I want to do is start the ssh-agent, tell it about my identity file, and have the agent worry about providing my identity file when necessary. To do that I do this:

eval $(ssh-agent) && ssh-add ~/.ssh/identity-file.pem

Once you do that SSH will use that identity file to connect to EC2 automatically. You just need to run that in each shell you are using to connect to EC2 and you are set.

Do you have more than one identity file? You can keep running ssh-add with additional identity files and it will manage them all.

Do you want to be really lazy and load all of your identities at once? Try this:

eval $(ssh-agent) && ssh-add ~/.ssh/*.pem

Enjoy!

NOTE: Your pem files need to have their permissions set to 400 so they can only be read by your user and not written to. Otherwise ssh-agent and ssh may refuse to use them.
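If yours aren't set that way, this will fix them:

chmod 400 ~/.ssh/*.pem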