Tim Mattison

Hardcore tech

Building Apache Camel Applications With Guice


UPDATE 2015-07-27: Included instructions to run the project from the command-line

Apache Camel is a great framework for implementing Enterprise Integration Patterns. However, most of the examples you’ll find out there show you how to use it with the Spring framework. I’m much more comfortable with Google Guice since I’ve used it in more production projects.

I did find an example of how to use Guice with Apache Camel, but it wasn’t well commented and involved a lot of extra work that didn’t provide any benefit. So below I’ve listed the things you’ll need to do to get Guice and Camel working together. What we are doing here is setting up Guice as a JNDI provider and automatically loading a Guice CamelModule via JNDI.

Step 1: Create a jndi.properties file in your project’s resources directory. The java.naming.factory.initial line tells JNDI to use Guice, and the org.guiceyfruit.modules line tells the javax.naming.InitialContext class which module it should run at startup.

jndi.properties
# Guice JNDI provider
java.naming.factory.initial = org.apache.camel.guice.jndi.GuiceInitialContextFactory

# list of guice modules to boot up (space separated)
org.guiceyfruit.modules = com.timmattison.CamelGuiceApplicationModule

Step 2: Create a class with a static main method that will run your Camel routes. Because JNDI and Guice do most of the work there isn’t much to do here.

com.timmattison.CamelApplication
package com.timmattison;

import javax.naming.InitialContext;

/**
 * Created by timmattison on 10/27/14.
 */
public class CamelApplication {
    public static void main(String[] args) throws Exception {
        // Create the Camel context with Guice
        InitialContext context = new InitialContext();

        // Loop forever
        while (true) {
            // Sleep so we don't kill the CPU
            Thread.sleep(10000);
        }
    }
}

Step 3: Create a class that extends RouteBuilder and implements a route (or multiple routes).

In my case I created a RestRoutes class that used the RESTlet framework and created a single route using the Direct component.

I moved the constants out to separate classes so they’d be easier to refer to in other places if necessary.

com.timmattison.CamelConstants
package com.timmattison;

/**
 * Created by timmattison on 10/27/14.
 */
public class CamelConstants {
    public static final String DIRECT_TEST_ROUTE_1 = "direct:testRoute1";
}
com.timmattison.HttpConstants
package com.timmattison;

/**
 * Created by timmattison on 10/27/14.
 */
public class HttpConstants {
    public static final String TEST_URL_1 = "/test1";
    public static final String TEST_URL_2 = "/test2";
    public static final String TEST_URL_3 = "/test3";
}
com.timmattison.RestRoutes
package com.timmattison;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.rest.RestBindingMode;

/**
 * Created by timmattison on 10/27/14.
 */
public class RestRoutes extends RouteBuilder {
    public static final String RESTLET = "restlet";
    public static final int PORT = 8000;

    @Override
    public void configure() throws Exception {
        restConfiguration().bindingMode(RestBindingMode.auto).component(RESTLET).port(PORT);

        rest(HttpConstants.TEST_URL_1)
                .get().to(CamelConstants.DIRECT_TEST_ROUTE_1);
    }
}

Step 4: Create the interfaces and the implementations that we’re going to use in our route.

Here we’re creating four things:

  1. The interface that we’re implementing that handles the route (SayHello1) that gets injected with Guice via JNDI. This interface doesn’t do anything other than give Guice a way to reference implementations of it.
  2. An implementation of that interface (BasicSayHello1). Also, BasicSayHello1 is going to have a dependency that we want injected with Guice to make the example more complete.
  3. The interface for the class that we want Guice to inject (MessageHandler)
  4. The implementation that gets injected (BasicMessageHandler)
com.timmattison.jndibeans.interfaces.SayHello1
package com.timmattison.jndibeans.interfaces;

import org.apache.camel.Processor;

/**
 * Created by timmattison on 10/27/14.
 */
public interface SayHello1 extends Processor {
}
com.timmattison.jndibeans.BasicSayHello1
package com.timmattison.jndibeans;

import com.timmattison.jndibeans.interfaces.SayHello1;
import com.timmattison.nonjndibeans.interfaces.MessageHandler;
import org.apache.camel.Exchange;

import javax.inject.Inject;

/**
 * Created by timmattison on 10/27/14.
 */
public class BasicSayHello1 implements SayHello1 {
    private final MessageHandler messageHandler;

    @Inject
    public BasicSayHello1(MessageHandler messageHandler) {
        this.messageHandler = messageHandler;
    }

    @Override
    public void process(Exchange exchange) throws Exception {
        exchange.getOut().setBody(messageHandler.getMessage(getClass().getName()));
    }
}
com.timmattison.nonjndibeans.interfaces.MessageHandler
package com.timmattison.nonjndibeans.interfaces;

/**
 * Created by timmattison on 10/28/14.
 */
public interface MessageHandler {
    public String getMessage(String input);
}
com.timmattison.nonjndibeans.BasicMessageHandler
package com.timmattison.nonjndibeans;

import com.timmattison.nonjndibeans.interfaces.MessageHandler;

/**
 * Created by timmattison on 10/28/14.
 */
public class BasicMessageHandler implements MessageHandler {
    @Override
    public String getMessage(String input) {
        return "Hello " + input + "!";
    }
}

Step 5: Create the direct route that handles the route from RestRoutes

com.timmattison.DirectTestRoutes
package com.timmattison;

import com.timmattison.jndibeans.interfaces.SayHello1;
import org.apache.camel.builder.RouteBuilder;

/**
 * Created by timmattison on 10/27/14.
 */
public class DirectTestRoutes extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from(CamelConstants.DIRECT_TEST_ROUTE_1)
                .beanRef(SayHello1.class.getName());
    }
}

Step 6: Create a Guice module that extends CamelModuleWithMatchingRoutes. I bound my SayHello1 interface to BasicSayHello1, MessageHandler to BasicMessageHandler, and included my RestRoutes and DirectTestRoutes.

com.timmattison.CamelGuiceApplicationModule
package com.timmattison;

import com.timmattison.jndibeans.BasicSayHello1;
import com.timmattison.jndibeans.interfaces.SayHello1;
import com.timmattison.nonjndibeans.BasicMessageHandler;
import com.timmattison.nonjndibeans.interfaces.MessageHandler;
import org.apache.camel.guice.CamelModuleWithMatchingRoutes;

/**
 * Created by timmattison on 10/27/14.
 */
public class CamelGuiceApplicationModule extends CamelModuleWithMatchingRoutes {
    @Override
    protected void configure() {
        super.configure();

        bind(SayHello1.class).to(BasicSayHello1.class);

        bind(MessageHandler.class).to(BasicMessageHandler.class);

        bind(RestRoutes.class);
        bind(DirectTestRoutes.class);
    }
}

Now if you don’t want Guice to handle any external JNDI bindings then you’re done. You can run this application as-is and it will serve up the RESTlet route.

To run the application from Maven do this:

mvn clean compile exec:java

You can test it by using cURL like this:

$ curl http://localhost:8000/test1
Hello com.timmattison.jndibeans.BasicSayHello1!

If you want to have Guice handle JNDI bindings you can easily add those into your module. For example, if I wanted to be able to get an instance of SayHello1 by using the JNDI name sayHello1FromGuice I could add this to my module:

    @Provides
    @JndiBind("sayHello1FromGuice")
    SayHello1 sayHello1FromGuice(Injector injector) {
        return injector.getInstance(SayHello1.class);
    }

This tells JNDI that our Guice provider will handle any JNDI requests for this name. Luckily, we didn’t have to create any of these manually because Guice automatically creates JNDI bindings for anything that you’ve called bind on using its class name.

For example there is an automatic JNDI binding for com.timmattison.jndibeans.interfaces.SayHello1 because we called bind(SayHello1.class).to(BasicSayHello1.class). If we ever want an instance of whatever Guice has bound to this we can ask JNDI for it using SayHello1.class.getName().
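For example, here’s a sketch of looking one of those beans up manually (the JndiLookupExample class is just an illustration, not part of the project):

package com.timmattison;

import com.timmattison.jndibeans.interfaces.SayHello1;

import javax.naming.InitialContext;

public class JndiLookupExample {
    public static void main(String[] args) throws Exception {
        // Boots the Guice module listed in jndi.properties
        InitialContext context = new InitialContext();

        // Asks JNDI (and therefore Guice) for the binding by class name
        SayHello1 sayHello1 = (SayHello1) context.lookup(SayHello1.class.getName());
        System.out.println(sayHello1.getClass().getName());
    }
}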

You’ll notice that in our DirectTestRoutes class we routed the direct test route to beanRef with the parameter SayHello1.class.getName(). That’s all you need to do as you add more classes to your Camel routes.

Want to try this out without building everything from scratch? Head over to my apache-camel-guice repo on GitHub.

Good luck! Don’t forget to post in the comments!

Hacking Together a Super Simple Webserver With Netcat on a Raspberry Pi


A few months ago I wanted to get some data out of a WeatherGoose II Climate Monitor so I could convert it into JSON and consume it in another application. I hacked something together that converted their format to JSON in a few hours as a proof-of-concept and the code sat for a few months.

A co-worker recently asked me if they could hook up to my script with a browser to try to do some visualization. I didn’t want to install Apache or nginx as a front end, and I didn’t want to modify the script to run its own webserver, so I came up with a one-liner that uses netcat to get the output of my script into their browser.

But wait! netcat has an option for exactly this. However, it isn’t available in the version that ships on the Raspberry Pi and I didn’t want to start downloading new versions.

Here it is:

SCRIPT="./weathergoose.py 192.168.1.99" && PORT="8080" && while true; do $SCRIPT | nc -l -p $PORT; done

You’ll need to set SCRIPT to the script you want to run (including any parameters it needs) and PORT to the port you want to listen on.

Be careful! This is not a real webserver. This just spits your script’s output back to the browser. Anything the browser sends to the script is ignored.

Also, the script runs first and pipes its output to netcat. This happens before netcat accepts a connection and can cause some confusion. Here’s a concrete example.

Assume I wrote a script that just returns the time. If I use the above snippet and start it at 5:00 PM, but don’t hit it with my web browser until 5:15 PM, the time I get back will be 5:00 PM. The next time I hit it, it will be 5:15 PM. The easiest way to think about it is that you get the data from when the script started or from the time of the previous request, whichever is later.
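To see this for yourself, here’s a minimal demonstration (time.sh is a hypothetical script, not part of the original setup):

printf '#!/bin/sh\ndate\n' > time.sh && chmod +x time.sh
SCRIPT="./time.sh" && PORT="8080" && while true; do $SCRIPT | nc -l -p $PORT; done

The first request returns the time the loop started; every request after that returns the time of the previous request.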

I hope to come up with a fix for this issue but I haven’t had the time yet. Do you have a fix? Does this work for you? Post in the comments below.

The First Bitcoin Transaction


Want to understand how Bitcoin transactions work? Follow my next couple of posts for step-by-step explanations of what is going on behind the scenes.

NOTE: These posts are going to be extremely technical.

In this post I’m going to explain the very first Bitcoin transaction in excruciating detail. The first Bitcoin transaction is not in the first block ever mined. It occurred in block 170, when the first Bitcoins were transferred from one address to another.

Each Bitcoin block contains transactions. The first transaction is called the coinbase and is a transaction that actually mines/creates new Bitcoins. All transactions after that are some kind of balance transfer from a set of addresses to another set of addresses.

Here is the first, non-mining transaction from block 170:

Version number: 01000000

Input counter: 01
Input script #0: c997a5e56e104102fa209c6a852dd90660a20b2d9c352423edce25857fcd3704000000004847304402204e45e16932b8af514961a1d3a1a25fdf3f4f7732e9d624c6c61548ab5fb8cd410220181522ec8eca07de4860a4acdd12909d831cc56cbbac4622082221a8768d1d090147304402204e45e16932b8af514961a1d3a1a25fdf3f4f7732e9d624c6c61548ab5fb8cd410220181522ec8eca07de4860a4acdd12909d831cc56cbbac4622082221a8768d1d0901ffffffff

Output counter: 02
Output script #0: 00ca9a3b00000000434104ae1a62fe09c5f51b13905f07f06b99a2f7159b2225f374cd378d71302fa28414e7aab37397f554a7df5f142c21c1b7303b8a0626f1baded5c72a704f7e6cd84cac
Output script #1: 00286bee0000000043410411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3ac00000000

Version number is the little endian representation of the version number of this transaction. Future transactions in a different format could have a different version number so they can be processed in new ways.

Input counter tells us that we should expect 1 input.

Input script #0 contains all the bytes in our input script.

Output counter tells us that we should expect 2 outputs.

Output scripts #0 and #1 contain all the bytes in the two output scripts in this transaction. These outputs show where the Bitcoins are going. In this case they’re being sent to two different addresses.
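To make the byte layout concrete, here’s a small sketch (my own illustration, not code from any Bitcoin client) that reads the version and the input counter out of the raw bytes. Counters are Bitcoin variable-length integers, which are a single byte for values below 0xfd:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class TxHeader {
    public static void main(String[] args) {
        // The first five bytes of the block 170 transaction: version + input counter
        byte[] raw = {0x01, 0x00, 0x00, 0x00, 0x01};
        ByteBuffer buffer = ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN);

        // 01000000 read as little endian is 1
        int version = buffer.getInt();

        // Varints below 0xfd are a single byte
        int inputCount = buffer.get() & 0xFF;

        System.out.println("version=" + version + ", inputs=" + inputCount);
    }
}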

Let’s look at the input first.

Previous transaction hash: c997a5e56e104102fa209c6a852dd90660a20b2d9c352423edce25857fcd3704
Previous output index: 00000000
Length: 0x48
VIRTUAL_OP_PUSH: 0x47
Bytes to push: 304402204e45e16932b8af514961a1d3a1a25fdf3f4f7732e9d624c6c61548ab5fb8cd410220181522ec8eca07de4860a4acdd12909d831cc56cbbac4622082221a8768d1d0901
Sequence number: ffffffff

Previous transaction hash tells us where to find the transaction that this input is working on. This transaction hash refers to the coinbase in block 9 which mined 50 BTC.

Previous output index tells us which output script in the transaction we should apply this input script to. In this transaction in block 9 there was only one output.

Length tells us the number of bytes that are coming up in our input script. In this case it is 0x48 or 72 bytes.

Now we’re at the actual input script. This input script consists of a single operation (VIRTUAL_OP_PUSH) which pushes a 71 byte value onto the stack. That 71 byte value is a signature that signs the previous output and the new output so that we know the person unlocking the coins is the same person spending them.

Bitcoin uses the ECC curve secp256k1 which is part of the SEC 2: Recommended Elliptic Curve Domain Parameters. Therefore all signing and validation operations are performed with the parameters from this curve.

The really interesting part is how we do the transaction validation. That requires a lot of explanation… as if this wasn’t long and complicated enough already.

Let’s look at the output script from block 9:

VIRTUAL_OP_PUSH: 0x41
Bytes to push: 0411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3
OP_CHECKSIG: 0xac

This script pushes a value onto the stack (which happens to be a public key) and then calls OP_CHECKSIG. This is called a pay-to-pubkey transaction. Simply put, it says that anyone who can create a signed transaction with a certain public key can spend this output.

OP_CHECKSIG does four things:

  1. Pops a value off of the stack and calls it the public key
  2. Pops a value off of the stack and calls it the signature
  3. Grabs data from the previous transaction and the current transaction and combines it in a particular way
  4. Computes and checks that the data from step #3 matches the public key and signature from steps #1 and #2

Now we concatenate the input and output scripts into one larger script and get this:

VIRTUAL_OP_PUSH - 71 bytes: 0x47
Signature from block 170: 304402204e45e16932b8af514961a1d3a1a25fdf3f4f7732e9d624c6c61548ab5fb8cd410220181522ec8eca07de4860a4acdd12909d831cc56cbbac4622082221a8768d1d0901

VIRTUAL_OP_PUSH - 65 bytes: 0x41
Public key from block 9: 0411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3

OP_CHECKSIG: ac

This is the most straightforward part of the process. We are pushing the data from the input script from block 170 and then pushing the data from the output script from block 9 and executing OP_CHECKSIG. This ordering makes sure that the person that originally had the Bitcoins maintains control over the final execution. Otherwise it would be possible for an attacker to just dump everything off of the stack except for a final value of 1 which would unlock the coins.

When the Bitcoin state machine sees OP_CHECKSIG then the real work begins.

From above we know we pop the public key off of the stack and then pop the signature off of the stack. Now we need to understand step #3 where we find the data that we’re checking the signature of.

Step 1 – Get a copy of the output script from the previous transaction (the VIRTUAL_OP_PUSH and OP_CHECKSIG data shown above), which will be

VIRTUAL_OP_PUSH: 0x41
Bytes to push: 0411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3
OP_CHECKSIG: 0xac

We will refer to this as our “new input script”.

Step 2 – Get a copy of the current transaction (again, from block 170)

Current transaction: 0100000001c997a5e56e104102fa209c6a852dd90660a20b2d9c352423edce25857fcd3704000000004847304402204e45e16932b8af514961a1d3a1a25fdf3f4f7732e9d624c6c61548ab5fb8cd410220181522ec8eca07de4860a4acdd12909d831cc56cbbac4622082221a8768d1d0901ffffffff0200ca9a3b00000000434104ae1a62fe09c5f51b13905f07f06b99a2f7159b2225f374cd378d71302fa28414e7aab37397f554a7df5f142c21c1b7303b8a0626f1baded5c72a704f7e6cd84cac00286bee0000000043410411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3ac00000000

Step 3 – Clear out all of the inputs’ script data from this transaction

Before:

:0100000001c997a5e56e104102fa209c6a852dd90660a20b2d9c352423edce25857fcd370400000000
Section to remove: 4847304402204e45e16932b8af514961a1d3a1a25fdf3f4f7732e9d624c6c61548ab5fb8cd410220181522ec8eca07de4860a4acdd12909d831cc56cbbac4622082221a8768d1d0901
:ffffffff0200ca9a3b00000000434104ae1a62fe09c5f51b13905f07f06b99a2f7159b2225f374cd378d71302fa28414e7aab37397f554a7df5f142c21c1b7303b8a0626f1baded5c72a704f7e6cd84cac00286bee0000000043410411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3ac00000000

After:

:0100000001c997a5e56e104102fa209c6a852dd90660a20b2d9c352423edce25857fcd370400000000
NULL placeholder: 00
:ffffffff0200ca9a3b00000000434104ae1a62fe09c5f51b13905f07f06b99a2f7159b2225f374cd378d71302fa28414e7aab37397f554a7df5f142c21c1b7303b8a0626f1baded5c72a704f7e6cd84cac00286bee0000000043410411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3ac00000000

The “after” block translates to:

Version number: 01000000
Input counter: 01

Remaining data from input #0: c997a5e56e104102fa209c6a852dd90660a20b2d9c352423edce25857fcd37040000000000ffffffff

Output counter: 02
Output script #0: 00ca9a3b00000000434104ae1a62fe09c5f51b13905f07f06b99a2f7159b2225f374cd378d71302fa28414e7aab37397f554a7df5f142c21c1b7303b8a0626f1baded5c72a704f7e6cd84cac

Output script #1: 00286bee0000000043410411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3ac00000000

Step 4 – Now remove all of the OP_CODESEPARATORS from the new input script. In block 170 there aren’t any of them so the new input script doesn’t change.

Step 5 – Put the new input script into the signing data at the current input number. In step #3 this means the new input script goes where the NULL placeholder was. This yields:

:0100000001c997a5e56e104102fa209c6a852dd90660a20b2d9c352423edce25857fcd37040000000043410411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3acffffffff0200ca9a3b00000000434104ae1a62fe09c5f51b13905f07f06b99a2f7159b2225f374cd378d71302fa28414e7aab37397f554a7df5f142c21c1b7303b8a0626f1baded5c72a704f7e6cd84cac00286bee0000000043410411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3ac00000000

Which translates to:

Version number: 01000000
Input counter: 01
Previous transaction hash: c997a5e56e104102fa209c6a852dd90660a20b2d9c352423edce25857fcd3704
Previous output index: 00000000
Input script length: 0x43
VIRTUAL_OP_PUSH Input #0: 0x41
Bytes to push Input #0: 0411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3
OP_CHECKSIG Input #0: 0xac
Sequence number: ffffffff

Value bytes: 00ca9a3b00000000
Output script length: 0x43
VIRTUAL_OP_PUSH Output #0: 0x41
Bytes to push Output #0: 04ae1a62fe09c5f51b13905f07f06b99a2f7159b2225f374cd378d71302fa28414e7aab37397f554a7df5f142c21c1b7303b8a0626f1baded5c72a704f7e6cd84c
OP_CHECKSIG Output #0: 0xac

Value bytes: 00286bee00000000
Output script length: 0x43
VIRTUAL_OP_PUSH Output #1: 0x41
Bytes to push Output #1: 0411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3
OP_CHECKSIG Output #1: 0xac
Lock time: 00000000

The value bytes represent the number of satoshis (each 1/100,000,000th of a Bitcoin) being transferred. The input was a mined block that created 50 BTC and the two outputs are receiving 10 and 40 BTC, respectively: 00ca9a3b00000000 read as little endian is 0x3b9aca00, or 1,000,000,000 satoshis (10 BTC), and 00286bee00000000 is 0xee6b2800, or 4,000,000,000 satoshis (40 BTC). Like all other value types in Bitcoin these values are little-endian.

Step 6 – Add the 32-bit little endian representation of the hash type onto the end of the signing data. The hash type is the last byte of the signature, which is 0x01 in this case. Expanded into a 32-bit little endian value it becomes the byte sequence 01000000. So our final data that needs to be signed is:

:0100000001c997a5e56e104102fa209c6a852dd90660a20b2d9c352423edce25857fcd37040000000043410411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3acffffffff0200ca9a3b00000000434104ae1a62fe09c5f51b13905f07f06b99a2f7159b2225f374cd378d71302fa28414e7aab37397f554a7df5f142c21c1b7303b8a0626f1baded5c72a704f7e6cd84cac00286bee0000000043410411db93e1dcdb8a016b49840f8c53bc1eb68a382e97b1482ecad7b148a6909a5cb2e0eaddfb84ccf9744464f82e160bfa9b8b64f9d4c03f999b8643f656b412a3ac00000000
Hash type: 01000000

If the signature from block 170 is a valid signature for this blob of binary data we just created, using the public key from block 9, then the transaction is valid.
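For the curious, here’s a rough sketch of that final check using Bouncy Castle (an assumed dependency; this is my illustration, not code from any Bitcoin client). The 72-byte value from the input is a DER-encoded signature with the hash type appended, so the last byte gets stripped off, and Bitcoin signs the double SHA-256 of the signing data:

import org.bouncycastle.jce.ECNamedCurveTable;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.jce.spec.ECNamedCurveParameterSpec;
import org.bouncycastle.jce.spec.ECPublicKeySpec;

import java.security.KeyFactory;
import java.security.MessageDigest;
import java.security.PublicKey;
import java.security.Security;
import java.security.Signature;
import java.util.Arrays;

public class CheckSig {
    public static boolean checkSig(byte[] pubKeyBytes, byte[] sigWithHashType, byte[] signingData) throws Exception {
        Security.addProvider(new BouncyCastleProvider());

        // Rebuild the public key from the 65-byte uncompressed secp256k1 point from block 9
        ECNamedCurveParameterSpec params = ECNamedCurveTable.getParameterSpec("secp256k1");
        ECPublicKeySpec keySpec = new ECPublicKeySpec(params.getCurve().decodePoint(pubKeyBytes), params);
        PublicKey publicKey = KeyFactory.getInstance("ECDSA", "BC").generatePublic(keySpec);

        // Strip the trailing hash type byte (0x01) to get the DER-encoded signature
        byte[] derSignature = Arrays.copyOf(sigWithHashType, sigWithHashType.length - 1);

        // Bitcoin signs the double SHA-256 of the signing data
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        byte[] hash = sha256.digest(sha256.digest(signingData));

        // NONEwithECDSA verifies a signature against a precomputed hash
        Signature verifier = Signature.getInstance("NONEwithECDSA", "BC");
        verifier.initVerify(publicKey);
        verifier.update(hash);
        return verifier.verify(derSignature);
    }
}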

Questions? Comments? Post below!

Rid Yourself of Smart Quotes, Smart Dashes, and Automatic Spelling Correction on Mac OS


Have you ever pasted a working bash script or piece of code into a text editor and had it fail to work when you copied it back out later? You’ve probably fallen victim to smart quotes, smart dashes, or automatic spelling correction.

For example, during development I write scripts in Evernote and two very common things happen:

  • aws is persistently and annoyingly replaced by the word “was”
  • Commands that include double or single quotes have those quotes replaced with scripting-hostile quotes that shells don’t understand

In Mac OS we can fix this in a few steps:

  1. Open System Preferences and click on Keyboard
  2. Click “Text”
  3. Uncheck “Use smart quotes and dashes”
  4. Uncheck “Correct spelling automatically”

You’re done! With both boxes unchecked, these “smart” features will never bother you again.

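If you prefer the command line, these NSGlobalDomain defaults should toggle the same settings (double-check the checkboxes afterwards, and relaunch any affected apps):

defaults write NSGlobalDomain NSAutomaticQuoteSubstitutionEnabled -bool false
defaults write NSGlobalDomain NSAutomaticDashSubstitutionEnabled -bool false
defaults write NSGlobalDomain NSAutomaticSpellingCorrectionEnabled -bool false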

Using SSH Agent to Simplify Connecting to EC2


TL;DR – Jump to the bottom and look for the eval $(ssh-agent) snippet!

Once you start using EC2 you’ll probably need to do a lot of things that involve SSH. A common frustration is having to specify your identity file when connecting to your EC2 instance. Instead of doing this:

ssh ubuntu@my-ec2-instance

You end up doing this:

ssh -i ~/.ssh/identity-file.pem ubuntu@my-ec2-instance

This gets even more complex when tools based on SSH are brought into the mix. Some of these tools don’t even have a mechanism to specify the identity file. When they do, it often makes the command line really ugly and it almost always ties the script to a specific user. For example:

rsync -avz -e "ssh -p1234  -i /home/username/.ssh/identity-file.pem" ...

This command is only going to work for the user username.

How do we make this a lot easier? It turns out there is a very simple way to make all of that pain go away. Whether you use rsync, unison, mosh, scp, or any of a number of other tools that make use of SSH under the hood there is a standard mechanism for SSH to manage your identity. That mechanism is called ssh-agent.

If I try to rsync directly to my EC2 instance I get this:

$ rsync -avzP ubuntu@my-ec2-instance:file-on-ec2.txt local-file.txt
Permission denied (publickey).
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: unexplained error (code 255) at io.c(226) [Receiver=3.1.1]

Instead what I want to do is start the ssh-agent, tell it about my identity file, and have the agent worry about providing my identity file when necessary. To do that I do this:

eval $(ssh-agent) && ssh-add ~/.ssh/identity-file.pem

Once you do that SSH will use that identity file to connect to EC2 automatically. You just need to run that in each shell you are using to connect to EC2 and you are set.

Do you have more than one identity file? You can keep running ssh-add with additional identity files and it will manage them all.

Do you want to be really lazy and load all of your identities at once? Try this:

eval $(ssh-agent) && ssh-add ~/.ssh/*.pem

Enjoy!

NOTE: Your pem files need to have the permission set to 400 so they can only be read by your user and not written to. Otherwise ssh-agent and ssh may refuse to use them.
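Setting that permission is one command:

chmod 400 ~/.ssh/*.pem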

Full Example Code Showing How to Use Guice and Jetty


Today I spent a significant amount of time wrestling Jetty and Guice in order to get a very simple configuration up and running. Many articles I found on this topic are incomplete or out of date, so here is a start-to-finish example of how to get Guice and Jetty working together without any web.xml.

Step 0 – Add these dependencies to your pom.xml if they aren’t there already

    <dependency>
        <groupId>com.google.inject</groupId>
        <artifactId>guice</artifactId>
        <version>3.0</version>
    </dependency>
    <dependency>
        <groupId>com.google.inject.extensions</groupId>
        <artifactId>guice-servlet</artifactId>
        <version>3.0</version>
    </dependency>
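You’ll also need Jetty itself on the classpath. Something like this should do it (the exact version is up to you; the 9.2.x line was current when this was written):

    <dependency>
        <groupId>org.eclipse.jetty</groupId>
        <artifactId>jetty-server</artifactId>
        <version>9.2.2.v20140723</version>
    </dependency>
    <dependency>
        <groupId>org.eclipse.jetty</groupId>
        <artifactId>jetty-servlet</artifactId>
        <version>9.2.2.v20140723</version>
    </dependency>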

Step 1 – Create a module that describes your servlet configuration. Assume we have three servlets. One is called FooServlet and is served on the “/foo” path. One is called BarServlet and is served on the “/bar” path. One is called IndexServlet and is served for all other paths.

import com.google.inject.servlet.ServletModule;

public class ApplicationServletModule extends ServletModule {
    @Override
    protected void configureServlets() {
        bind(FooServlet.class);
        bind(BarServlet.class);
        bind(IndexServlet.class);

        serve("/foo").with(FooServlet.class);
        serve("/bar").with(BarServlet.class);
        serve("/*").with(IndexServlet.class);
    }
}

Step 2 – Create a module that contains your Guice bindings. We’ll assume you have something called NonServletImplementation you want bound to NonServletInterface that you’ll need to have injected into your servlets.

import com.google.inject.AbstractModule;

public class NonServletModule extends AbstractModule {
    protected void configure() {
        bind(NonServletInterface.class).to(NonServletImplementation.class);
    }
}

Step 3 – Instantiate your injector with all of your modules in the code where you want to create the server. If you have other modules you want to include you should include those as well.

import com.google.inject.Guice;
import com.google.inject.Injector;

NonServletModule nonServletModule = new NonServletModule();
ApplicationServletModule applicationServletModule = new ApplicationServletModule();
Injector injector = Guice.createInjector(nonServletModule, applicationServletModule);

Step 4 – Instantiate the server. You do not need to pass it the injector explicitly. Guice will handle that for you but you MUST instantiate the injector before this code runs.

import com.google.inject.servlet.GuiceFilter;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.DefaultServlet;
import org.eclipse.jetty.servlet.ServletContextHandler;

import javax.servlet.DispatcherType;
import java.util.EnumSet;

int port = 8080;
Server server = new Server(port);

ServletContextHandler servletContextHandler = new ServletContextHandler(server, "/", ServletContextHandler.SESSIONS);

// Route every request through Guice's filter so the ServletModule mappings apply
servletContextHandler.addFilter(GuiceFilter.class, "/*", EnumSet.allOf(DispatcherType.class));

// You MUST add DefaultServlet or your server will always return 404s
servletContextHandler.addServlet(DefaultServlet.class, "/");

// Start the server
server.start();

// Wait until the server exits
server.join();

Step 5 – Make sure your servlets are setup to use Guice and use the @Singleton annotation. Only the FooServlet skeleton is shown here but you should create the BarServlet and the IndexServlet as well.

import javax.inject.Inject;
import javax.inject.Singleton;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

/**
 * Created by timmattison on 8/4/14.
 */
@Singleton
public class FooServlet extends HttpServlet {
    private final NonServletInterface nonServletInterface;

    @Inject
    public FooServlet(NonServletInterface nonServletInterface) {
        this.nonServletInterface = nonServletInterface;
    }

    protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        // Do whatever you need to do with POSTs
    }

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        // Do whatever you need to do with GETs
    }
}

If all goes well then everything will be wired up with Guice and your Jetty server is ready to rock. It turns out to be a lot simpler than working with the web.xml in my opinion since everything is mapped out explicitly in one place.

Using Amazon STS Credentials Inside of a Properties File


Amazon provides several credentials providers in their Java API that let you use IAM user credentials in various ways. The credentials can come from IMDS (the EC2 Instance Metadata Service), environment variables, or a properties file, just to name a few.

If you’re developing and debugging and you need to use STS credentials your options are a bit more limited. To help deal with this I came up with a few bits of code that, for me at least, make it significantly easier.

First, there’s an awscredentials.properties file format you need to follow that looks like this:

aws.accessKeyId=XXXXXXXXXXXXXXXXXXX
aws.secretAccessKey=YYYYYYYYYYYYYYYYYYY
aws.sessionToken=ZZZZZZZZZZZZZZZZZZZ

Replace the X, Y, and Z strings with your credentials and put the file in your resources directory where the classloader can find it. DO NOT COMMIT THEM TO SOURCE CONTROL!

Next, there’s a method that loads these credentials into the system properties:

private static final String AWSCREDENTIALS_PROPERTIES = "awscredentials.properties";

void loadAwsCredentialsProperties() throws IOException {
  InputStream inputStream = this.getClass().getClassLoader().getResourceAsStream(AWSCREDENTIALS_PROPERTIES);
  
  // Was there a properties file?
  if (inputStream == null) {
      // No, just return
      return;
  }
  
  Properties properties = new Properties(System.getProperties());
  properties.load(inputStream);
  
  // set the system properties
  System.setProperties(properties);
}

Finally, there’s the credentials provider:

import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.BasicSessionCredentials;
import com.amazonaws.services.securitytoken.model.Credentials;
import com.amazonaws.util.StringUtils;

/**
 * Created by timmattison on 9/2/14.
 */
public class SystemPropertiesStsCredentialsProvider implements AWSCredentialsProvider {
    private static final String ACCESS_KEY_ID_SYSTEM_PROPERTY = "aws.accessKeyId";
    private static final String SECRET_ACCESS_KEY_SYSTEM_PROPERTY = "aws.secretAccessKey";
    private static final String SESSION_TOKEN_SYSTEM_PROPERTY = "aws.sessionToken";

    public AWSCredentials getCredentials() {
        // Get the access key ID
        String accessKeyId = StringUtils.trim(System.getProperty(ACCESS_KEY_ID_SYSTEM_PROPERTY));

        // Get the secret access key
        String secretAccessKey = StringUtils.trim(System.getProperty(SECRET_ACCESS_KEY_SYSTEM_PROPERTY));

        // Get the session token
        String sessionToken = StringUtils.trim(System.getProperty(SESSION_TOKEN_SYSTEM_PROPERTY));

        // Are we missing any of the necessary values?
        if (StringUtils.isNullOrEmpty(accessKeyId)
                || StringUtils.isNullOrEmpty(secretAccessKey)
                || StringUtils.isNullOrEmpty(sessionToken)) {
            // Yes, throw an exception like the Amazon code does
            throw new AmazonClientException(
                    "Unable to load AWS credentials from Java system "
                            + "properties (" + ACCESS_KEY_ID_SYSTEM_PROPERTY + ", "
                            + SECRET_ACCESS_KEY_SYSTEM_PROPERTY + ", and "
                            + SESSION_TOKEN_SYSTEM_PROPERTY + ")");
        }

        // Create the credentials
        Credentials sessionCredentials = new Credentials();
        sessionCredentials.setAccessKeyId(accessKeyId);
        sessionCredentials.setSecretAccessKey(secretAccessKey);
        sessionCredentials.setSessionToken(sessionToken);

        // Convert them to basic session credentials
        BasicSessionCredentials basicSessionCredentials = new BasicSessionCredentials(
                sessionCredentials.getAccessKeyId(),
                sessionCredentials.getSecretAccessKey(),
                sessionCredentials.getSessionToken());

        return basicSessionCredentials;
    }

    @Override
    public void refresh() {
        // Do nothing
    }
}
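Putting it all together looks something like this (the S3 client is just an illustration; any client that accepts an AWSCredentialsProvider will work the same way):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;

// Load awscredentials.properties into the system properties first (the method from above)
loadAwsCredentialsProperties();

// Then hand the provider to whatever client you're using
AmazonS3 s3 = new AmazonS3Client(new SystemPropertiesStsCredentialsProvider());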

This should make things quite a bit easier if you don’t have access to a real IAM user and must use STS for your application.

Simple Snippets for Using AWS Credentials While Debugging


While debugging and developing using the AWS SDK you’ll find that sometimes you just need to use real credentials on a box that lives outside of EC2. You should always be using Instance Metadata for your credentials inside of EC2 though. Never use this pattern inside EC2!

Also, make sure you never commit your credentials. That can be an expensive mistake when they show up on GitHub and people snag them for Bitcoin mining.

NOTE: These snippets include @Inject and @Assisted annotations used by Guice. If you’re not using Guice remove those and the related imports.

Anyway, if you want to use static IAM user credentials you can use a credentials provider like this:

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.google.inject.Inject;
import com.google.inject.assistedinject.Assisted;

/**
 * Created by timmattison on 9/2/14.
 */
public class TempNonStsCredentialsProvider implements AWSCredentialsProvider {
    private final String awsAccessKeyId;
    private final String awsSecretKey;

    @Inject
    public TempNonStsCredentialsProvider(@Assisted("awsAccessKeyId") String awsAccessKeyId,
                                         @Assisted("awsSecretKey") String awsSecretKey) {
        this.awsAccessKeyId = awsAccessKeyId;
        this.awsSecretKey = awsSecretKey;
    }

    @Override
    public AWSCredentials getCredentials() {
        return new AWSCredentials() {
            @Override
            public String getAWSAccessKeyId() {
                return awsAccessKeyId;
            }

            @Override
            public String getAWSSecretKey() {
                return awsSecretKey;
            }
        };
    }

    @Override
    public void refresh() {
        // Do nothing
    }
}

Pass in your credentials and you’re good to go. If you’re using STS it requires a little bit more work. Use this instead:

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.BasicSessionCredentials;
import com.amazonaws.services.securitytoken.model.Credentials;
import com.google.inject.assistedinject.Assisted;

import javax.inject.Inject;

/**
 * Created by timmattison on 9/2/14.
 */
public class TempStsCredentialsProvider implements AWSCredentialsProvider {
    private final String awsAccessKeyId;
    private final String awsSecretAccessKey;
    private final String awsSessionToken;

    @Inject
    public TempStsCredentialsProvider(@Assisted("awsAccessKeyId") String awsAccessKeyId,
                                      @Assisted("awsSecretAccessKey") String awsSecretAccessKey,
                                      @Assisted("awsSessionToken") String awsSessionToken) {
        this.awsAccessKeyId = awsAccessKeyId;
        this.awsSecretAccessKey = awsSecretAccessKey;
        this.awsSessionToken = awsSessionToken;
    }

    @Override
    public AWSCredentials getCredentials() {
        Credentials sessionCredentials = new Credentials();
        sessionCredentials.setAccessKeyId(awsAccessKeyId);
        sessionCredentials.setSecretAccessKey(awsSecretAccessKey);
        sessionCredentials.setSessionToken(awsSessionToken);

        BasicSessionCredentials basicSessionCredentials = new BasicSessionCredentials(
                sessionCredentials.getAccessKeyId(),
                sessionCredentials.getSecretAccessKey(),
                sessionCredentials.getSessionToken());

        return basicSessionCredentials;
    }

    @Override
    public void refresh() {
      // Do nothing
    }
}

Now you just need to pass in the extra session token parameter and then you can use this to provide credentials to your AWS calls.
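If you’re not wiring it up with Guice’s assisted injection you can construct the provider directly. A quick sketch (again, the S3 client is just an example):

AWSCredentialsProvider credentialsProvider =
        new TempStsCredentialsProvider(awsAccessKeyId, awsSecretAccessKey, awsSessionToken);
AmazonS3 s3 = new AmazonS3Client(credentialsProvider);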

Checking PostgreSQL to See if an Index Already Exists


In my last post I showed you a simple way to check to see if a constraint already existed in PostgreSQL. Now I want to show you how to do the same thing for an index.

Here’s the code but keep in mind that it makes the assumption that everything is in the public schema.

CREATE OR REPLACE FUNCTION create_index_if_not_exists (t_name text, i_name text, index_sql text) RETURNS void AS $$
DECLARE
  full_index_name varchar;
  schema_name varchar;
BEGIN

full_index_name = t_name || '_' || i_name;
schema_name = 'public';

IF NOT EXISTS (
    SELECT 1
    FROM   pg_class c
    JOIN   pg_namespace n ON n.oid = c.relnamespace
    WHERE  c.relname = full_index_name
    AND    n.nspname = schema_name
    ) THEN

    execute 'CREATE INDEX ' || full_index_name || ' ON ' || schema_name || '.' || t_name || ' ' || index_sql;
END IF;
END
$$
LANGUAGE plpgsql VOLATILE;

You can now use the function like this:

SELECT create_index_if_not_exists('table', 'index_name', '(column)');
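If you ever want to eyeball what actually got created, the same catalog tables the function checks will list the indexes in the public schema:

SELECT c.relname
FROM   pg_class c
JOIN   pg_namespace n ON n.oid = c.relnamespace
WHERE  n.nspname = 'public'
AND    c.relkind = 'i';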

No duplicated data, no exceptions. Enjoy!

Checking PostgreSQL to See if a Constraint Already Exists


Checking to see if a constraint already exists should be easy. H2 and many other databases have syntax for it.

For some reason PostgreSQL, my favorite database, doesn’t have this. I looked around and found a decent solution on Stack Overflow that I can add to my default template but something about it bothered me.

I didn’t like the fact that the code asked for the table name and constraint name but then didn’t use it in the SQL statement. Leaving it like this means that someone could write this (note that foo becomes foo2 and bar becomes bar2 in the first two parameters):

SELECT create_constraint_if_not_exists(
        'foo',
        'bar',
        'ALTER TABLE foo ADD CONSTRAINT bar CHECK (foobies < 100);');

SELECT create_constraint_if_not_exists(
        'foo2',
        'bar2',
        'ALTER TABLE foo ADD CONSTRAINT bar CHECK (foobies < 100);');

And they would get an exception rather than having the constraint creation be skipped which could break a lot of things that expect this function to be safe.

They also could do this (note that foo becomes foo2 and bar becomes bar2 in the constraint SQL):

SELECT create_constraint_if_not_exists(
        'foo',
        'bar',
        'ALTER TABLE foo ADD CONSTRAINT bar CHECK (foobies < 100);');

SELECT create_constraint_if_not_exists(
        'foo',
        'bar',
        'ALTER TABLE foo2 ADD CONSTRAINT bar2 CHECK (foobies < 100);');

This could be even worse because the new constraint would silently never be created.

My solution was to modify this script slightly:

CREATE OR REPLACE FUNCTION create_constraint_if_not_exists (t_name text, c_name text, constraint_sql text)
  RETURNS void
AS
$BODY$
  begin
    -- Look for our constraint
    if not exists (select constraint_name
                   from information_schema.constraint_column_usage
                   where table_name = t_name  and constraint_name = c_name) then
        execute 'ALTER TABLE ' || t_name || ' ADD CONSTRAINT ' || c_name || ' ' || constraint_sql;
    end if;
end;
$BODY$
LANGUAGE plpgsql VOLATILE;

Now you call it like this:

SELECT create_constraint_if_not_exists('foo', 'bar', 'CHECK (foobies < 100);');

And it will check the constraint properly by name. This doesn’t stop you from creating multiple constraints with the same criteria and different names though. That’s something you’ll need to check for manually (for now).
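A starting point for that manual check is the same view the function queries:

SELECT table_name, constraint_name
FROM   information_schema.constraint_column_usage
WHERE  table_name = 'foo';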