Tim Mattison

Hardcore tech

How-To: Get Verizon’s Media Manager to Read Content From a Network Location


I ran into this problem too when I first got FiOS installed. Mapping a network drive won’t work, but using “subst” will. I now have Media Manager reading my pictures over a network connection. Here’s how to do it:

  1. Open the start menu

  2. Type “cmd”

  3. Right click on “cmd” and select “Run as administrator”

  4. Run subst like this:

    subst DRIVE: LOCATION

DRIVE: needs to be a free drive letter like “F:” or “G:”. LOCATION needs to be the UNC path to your network share, like “\\myothercomputer\pictures”. Don’t forget to include the quotes if your LOCATION has spaces in it!

  5. Restart Media Manager and try to add the new virtual drive to it. It should start working right away.

You may need to do this on each reboot; I never reboot this computer so I haven’t verified that. You can put these commands in a batch file to make your life easier, but you’ll need to make sure the batch file runs as an administrator.
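
If you go the batch file route, it can be as small as this (a sketch; the drive letter and share path are just the example values from above, so adjust them for your setup):

@echo off
rem Map the network share to a virtual drive letter for Media Manager
subst G: "\\myothercomputer\pictures"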

Let me know in the comments if it works for you or not. If not I can probably help work out any kinks with you.

Tip: Fix “‘Xterm’: Unknown Terminal Type” Messages in Debian


This one has been a bit of a nuisance on newly spooled-up Debian instances for me lately. When I try to run “top” or “clear” or really anything that manipulates the terminal I get the following message:

'xterm': unknown terminal type.

This happens because either you haven’t installed ncurses-term (unlikely) or the symlink at /usr/share/terminfo/x/xterm (pointing to /lib/terminfo/x/xterm) is missing. To cover both possibilities, do this:

sudo apt-get install ncurses-term
sudo ln -s /lib/terminfo/x/xterm /usr/share/terminfo/x/xterm

Poof, your terminal works again!
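
To confirm that the terminal type now resolves, a quick check (assuming the ncurses tools are installed; infocmp ships with them on Debian) is:

infocmp xterm

If that dumps a terminfo description instead of an error, you’re all set.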

How-To: Write a Netduino Driver for the Grove Chainable RGB LED


A lot of people probably look at hardware that doesn’t come with drivers for the Netduino or Arduino and don’t even consider picking it up if they’re new to this scene. In this article I’ll show you how I wrote a driver for Grove’s chainable RGB LED just by carefully reading the specs and experimenting. I am no Netduino expert; I have only written a tiny bit of code for it since I got it, but that just reinforces how easy some drivers can be to write.

Keep in mind that my illustration of how easy it was to write this driver is not a reflection on how easy it is to write all drivers. Some drivers take a ton of work. Make sure you read the documentation before you buy something so you don’t get stuck with some hardware you can’t use.

My first step was to find the documentation for this device’s protocol. I scanned around until I found the “Communication Protocol” section and started digging. It showed me that the protocol uses two connections: the first is called “CIN” for clock input and the second is called “DIN” for data input. Simple enough, especially if we’re using the standard Grove base shield and connectors. Just hook it up, keep track of which header you’re using, and you’re ready to start programming. I used header 6 on my base shield so the relevant pins for me were D6 and D7. D6 was CIN and D7 was DIN.

Now you’ll see that there are six well-defined bullet points explaining the basics of the protocol:

  • Data needs to be ready before CIN, and DIN gets into the buffer on the rising edge of CIN.

  • First 32 bits ‘0’ are Start Frame

  • Flag bit is two ‘1’

  • Calibration bits B7’,B6’;G7’,G6’ and R7’,R6’ are inverse codes of B7,B6;G7,G6 and R7,R6

  • Gray data MSB first, and the order is BLUE, GREEN, and RED

  • After all nodes’ data is sent, need to send another 32 bits ‘0’ to update the data

Let’s step through these one by one to figure out how to send data to this device.

They first tell us that “data needs to be ready before CIN, and DIN gets into the buffer on the rising edge of CIN”. What this translates to when programming is: to send a bit to the device, set that bit on the DIN pin (either 1 or 0), then set the CIN pin high, and then set the CIN pin back to low. Now you’ve sent one bit of data to the device. Abstraction lets us do this thinking once and fall back on it later, so let’s write a function that sends one bit:

private void sendBit(bool bit)
{
    // Get DIN into the proper state
    din.Write(bit);

    // Set the clock high
    cin.Write(true);

    // Set the clock low
    cin.Write(false);
}

This function makes the assumption that you’ve defined cin and din elsewhere. The setup for them in my case (using pins D6 and D7 as I described above) would look like this:

// Use D6 for CIN and D7 for DIN (Grove Base Shield v1.2 header #6)
OutputPort cin = new OutputPort(Pins.GPIO_PIN_D6, false);
OutputPort din = new OutputPort(Pins.GPIO_PIN_D7, false);

Now they tell us that the first 32 bits are all zeroes and that this is called a start frame. This makes me think it would be a good idea to expand our abstraction to let us send bytes and then write another function that would send this start frame. That would look like this:

private void sendByte(byte data)
{
    // Send the bits MSB first
    sendBit((data & 0x80) == 0x80);
    sendBit((data & 0x40) == 0x40);
    sendBit((data & 0x20) == 0x20);
    sendBit((data & 0x10) == 0x10);
    sendBit((data & 0x08) == 0x08);
    sendBit((data & 0x04) == 0x04);
    sendBit((data & 0x02) == 0x02);
    sendBit((data & 0x01) == 0x01);
}

private void sendStartFrame()
{
    // The start frame is 32 bits of zeroes
    sendByte(0);
    sendByte(0);
    sendByte(0);
    sendByte(0);
}

In the sendByte function I’m taking a byte and using a bitwise AND and a comparison to extract the bits one by one. One of the next bullet points says the data is MSB first, so we want to send the most significant (i.e. highest-value) bits first; that’s how I ordered the sendBit calls. Now that we can send bytes, sending the start frame is as easy as calling that function four times with the value 0.
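
As an aside, if the eight explicit calls feel repetitive, an equivalent loop-based version (just a sketch; my actual driver spells the bits out as above) would be:

private void sendByte(byte data)
{
    // Walk a one-bit mask from the MSB (0x80) down to the LSB (0x01)
    for (int mask = 0x80; mask > 0; mask >>= 1)
    {
        sendBit((data & mask) == mask);
    }
}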

Next come the flag bits. The protocol shows that the start frame is followed by two flag bits, both set to ‘1’. Here’s a simple function that sends them:

private void sendFlagBits()
{
    // The flag bits are two 1s
    sendBit(true);
    sendBit(true);
}

Now this part gets a bit trickier, but not too bad. They tell us that we need to send the inverse of B7, B6, G7, G6, R7, and R6, followed by the actual color data itself as bytes. B7 and B6 are the two highest bits of the blue color component, G7 and G6 are the two highest bits of the green color component, and R7 and R6 are the two highest bits of the red color component. Sending that data with the functions we’ve built up is really easy.

private void sendColorData(byte red, byte green, byte blue)
{
    // Send the inverse bits of B7, B6, G7, G6, R7, and R6
    sendBit((blue & 0x80) != 0x80);
    sendBit((blue & 0x40) != 0x40);
    sendBit((green & 0x80) != 0x80);
    sendBit((green & 0x40) != 0x40);
    sendBit((red & 0x80) != 0x80);
    sendBit((red & 0x40) != 0x40);

    // Send the actual color bytes
    sendByte(blue);
    sendByte(green);
    sendByte(red);
}

We’re almost there; there’s only one step left! Now we need to send the end frame. It turns out that the end frame is the same as the start frame, but to keep the code readable I did this:

private void sendEndFrame()
{
    // The end frame is the same as the start frame
    sendStartFrame();
}

Now you have enough information to send a color to your device. We should probably wrap it up to make it even easier to use, though. Let’s think about how this is going to be used in practice. A typical user will have a few of these LEDs chained together, but for testing you might want to use just one. For a single LED the protocol requires a start frame, then flag bits, then color data, then the end frame. For two LEDs it looks like this:

  • Send start frame

  • Send flag bits

  • Send color data

  • Send flag bits

  • Send color data

  • Send end frame

So for our first LED we want to send the start frame, the flag bits and the color data. For our last LED we want to send flag bits, the color data, and the end frame. Here’s a function that does that:

private void setColor(byte red, byte green, byte blue, bool first, bool last)
{
    // Is this the first color?
    if (first)
    {
        // Yes, send the start frame
        sendStartFrame();
    }
    else
    {
        // No, do nothing
    }

    // Send the flag bits
    sendFlagBits();

    // Send the colors
    sendColorData(red, green, blue);

    // Is this the last color?
    if (last)
    {
        // Yes, send the end frame
        sendEndFrame();
    }
    else
    {
        // No, do nothing
    }
}

The extra else blocks have no impact on the generated executable; they’re just there for clarity. You can remove them if you want. Now if you want to send a bunch of colors to a string of three LEDs you can do this:

setColor(255, 0, 0, true, false);
setColor(0, 255, 0, false, false);
setColor(0, 0, 255, false, true);

That would set a string of three LEDs to solid red, solid green, and solid blue. That’s it, your driver is written!

Check out my driver on GitHub to see a few more enhancements I added. My code abstracts a color from three integers into an RGB object so it’s easier to pass around, and it also has a function that can set a whole string of LEDs from an array of RGB objects. There’s some sample code as well, and if you want to see the system in action, check out the demo videos.

Post in the comments and share your thoughts and project ideas. If you use this library please let me know!

How-To: Fix Maven Errors in Eclipse When Getting Started With Heroku


I haven’t used Heroku much yet but with the addition of Java to their platform I’m starting to see it as a really interesting option. Yesterday I watched a great video on how to get started with Java on Heroku. It went well until I tried converting my project to a Maven project. Then I got this error message in all of my pom.xml files:

Plugin execution not covered by lifecycle configuration

I checked the usual places but didn’t find a solution to the issue. Then I decided to try adding the m2e plugin from this update site:

http://download.eclipse.org/technology/m2e/releases

After adding the plugin and restarting my IDE I got two different error messages:

maven-dependency-plugin (goals "copy-dependencies", "unpack") is not supported by m2e.

Project configuration is not up-to-date with pom.xml. Run project configuration update.

The second error had a quick fix, so I tried that and it worked. Now the Java example application that uses the Play framework and the one that uses Spring MVC and Hibernate both work. However, the ones that use JAX-RS and embedded Jetty did not; they still showed the maven-dependency-plugin error. The fix is to add the following XML to the <build> section of your pom.xml:

<pluginManagement>
  <plugins>
    <plugin>
      <groupId>org.eclipse.m2e</groupId>
      <artifactId>lifecycle-mapping</artifactId>
      <version>1.0.0</version>
      <configuration>
        <lifecycleMappingMetadata>
          <pluginExecutions>
            <pluginExecution>
              <pluginExecutionFilter>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-dependency-plugin</artifactId>
                <versionRange>[1.0.0,)</versionRange>
                <goals>
                  <goal>copy-dependencies</goal>
                </goals>
              </pluginExecutionFilter>
              <action>
                <ignore></ignore>
              </action>
            </pluginExecution>
          </pluginExecutions>
        </lifecycleMappingMetadata>
      </configuration>
    </plugin>
  </plugins>
</pluginManagement>

After that you’ll have to apply the quick fix for the “Project configuration is not up-to-date” error again, and then you’ll be error-free, at least in your pom.xml…

Post in the comments and let me know if it worked or if you need any help.

Tip: Handle Failed Tasks Throwing “ENOENT” Errors in Hadoop


Today when I tried to run a new Hadoop job I got the following error:

     [exec] 12/03/21 22:51:47 INFO mapred.JobClient: Task Id : attempt_201203212250_0001_m_000002_1, Status : FAILED
     [exec] Error initializing attempt_201203212250_0001_m_000002_1:
     [exec] ENOENT: No such file or directory
     [exec]     at org.apache.hadoop.io.nativeio.NativeIO.chmod(Native Method)
     [exec]     at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:521)
     [exec]     at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:344)
     [exec]     at org.apache.hadoop.mapred.JobLocalizer.initializeJobLogDir(JobLocalizer.java:240)
     [exec]     at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:216)
     [exec]     at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1352)
     [exec]     at java.security.AccessController.doPrivileged(Native Method)
     [exec]     at javax.security.auth.Subject.doAs(Subject.java:416)
     [exec]     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
     [exec]     at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1327)
     [exec]     at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1242)
     [exec]     at org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:2541)
     [exec]     at org.apac

It wasn’t immediately apparent from the error messages which file wasn’t found, so I checked the logs, the JobTracker, and my code, and ran some known-good jobs (which also failed); basically everything I could think of. It turns out that because I had accidentally run a script as “root” (don’t worry, it was only on my desktop), several files in the hdfs user’s home directory had changed ownership to “root”. Because of that, Hadoop was unable to create files in the /usr/lib/hadoop-0.20 directory.

NOTE: These steps assume you are using Hadoop 0.20. Adjust the paths in the commands accordingly if you aren’t.

If you want a quick fix try these steps (only if you take full responsibility for anything that may go wrong):

  1. Stop Hadoop using the stop-all.sh script as the hdfs user

  2. su to the hdfs user

  3. Run this:

    chown -R hdfs:hdfs /usr/lib/hadoop-0.20 /var/*/hadoop-0.20

  4. Restart Hadoop using the start-all.sh script as the hdfs user
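
To verify the ownership actually changed, a quick sanity check (not part of the original fix; adjust the paths to match your installation) is:

ls -ld /usr/lib/hadoop-0.20 /var/log/hadoop-0.20

Everything listed should now be owned by hdfs:hdfs.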

Now your jobs should start running again. Post in the comments if this procedure works for you or if you need any help.

How-To: Install Perl Debugging in Eclipse on Debian/Ubuntu


If you’re looking to use Eclipse as a debugger for your Perl scripts things can get a bit hairy quickly. You need to do a lot of things to get it to be happy so let’s step through them all rather than have you hunt for the secret sauce like I did today.

First, you’ll want to add EPIC (Eclipse Perl Integration) as described on the EPIC site. That will add support for creating Perl projects, syntax highlighting, and all that.

Next, set a breakpoint in one of your Perl scripts and try to debug it. If you’re unlucky you may get one of two error messages. The first error message wants you to install PadWalker, a Perl module that handles all of the debugging niceties for Eclipse. To install it you can use either CPAN or apt. Using apt is as simple as:

sudo apt-get install libpadwalker-perl

Once you install PadWalker, restart Eclipse and try to debug one of your scripts again. If it works, you’re set. The second possible error message is below…

Now, you’ve come all this way and it still doesn’t work. You’ve probably received an error message like this:

Could not create the view: Plug-in "org.eclipse.debug.ui" was unable to instantiate class "org.eclipse.debug.internal.ui.views.variables.VariablesView".

If you dig deeper you’ll see errors like this:

java.lang.ClassCircularityError: org/eclipse/debug/internal/ui/DebugUIPlugin

And if you dig even deeper you’ll see errors like this:

Conflict for 'org.epic.perleditor.commands.clearMarker'

The fix for this was tricky to figure out so just follow these steps:

  1. Close Eclipse

  2. Uninstall libpadwalker-perl by running

    sudo apt-get remove --purge libpadwalker-perl

  3. Restart Eclipse and try to set a breakpoint in a Perl script; it should fail (no breakpoint should appear)

  4. Close Eclipse

  5. Reinstall libpadwalker-perl by running

    sudo apt-get install libpadwalker-perl

  6. Restart Eclipse, set a breakpoint, and start debugging again

At this point the variables and breakpoints should always work. Unfortunately the expressions panel will not; it looks like this is not supported in EPIC just yet. But, in any case, you now have a full-fledged Perl debugger so you can (mostly) stop using print statements to debug your code post mortem.

There are some quirks to note:

  1. “Step Over” (typically F6) does not work as expected and will step into modules. If “Step Return” worked this wouldn’t be a problem but it doesn’t (see the next bullet point). In this case if you are trying to step over a module you may have to back out and set a breakpoint where the execution will return to the script you want to debug.

  2. “Step Return” (typically F7) does not work as expected. It will usually run until your script ends or hits a breakpoint.

  3. The console window will not let you run arbitrary Perl code, so it’s not a simple replacement for the expressions panel.

  4. Perl modules (files with a .pm extension) may not appear with syntax highlighting enabled. If you are debugging Perl modules you may want to retool your setup and run the module as a Perl script OR have Perl load your module from a file with a .pl extension.

Good luck. Now clean up/fix that Perl code and post in the comments.

Tip: Getting the Right Static Imports Necessary for Basic JUnit Testing


I’ve written plenty of JUnit tests in the past but usually I’m building onto an existing codebase of tests. In the past few days I’ve been playing around with Unicode and wanted to copy a code snippet from a Hadoop book to see how everything looks in the debugger. When I entered the code I realized that I was missing some methods that I needed to complete the tests.

Specifically, I was trying to use assertThat() and is() but didn’t know where to find them. After a bit of Googling I found the two static imports I needed so I could use assertThat() without qualifying it as Assert.assertThat(), and likewise for is(). They are:

import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertThat;

I have to admit that org.hamcrest is a bit less obvious than I would have liked. :)
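
For reference, here’s a minimal test that uses both (a sketch with a made-up class name, assuming JUnit 4 and Hamcrest are on the classpath):

import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertThat;

import org.junit.Test;

public class StringLengthTest {
    @Test
    public void lengthIsComputedCorrectly() {
        // Reads almost like a sentence: assert that the length is 5
        assertThat("hello".length(), is(5));
    }
}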

Tip: A Quick Primer on Waiting on Multiple Threads in Java


Last night I was writing some code to do some performance testing on HDFS. I noticed that single-threaded performance wasn’t anywhere near as good as I expected and my CPUs were spending most of their time idle. I decided to add some threads to the process to see if a multi-threaded speed test would consume some of that idle CPU. It worked as expected, so I figured I would share some basic knowledge on how I started up multiple threads, had them do their work, waited for them to finish without polling, and then recorded the total duration to calculate my statistics.

What you’ll need to do first is decide what you want to do in the processing thread. This code will go into a Java Runnable like this:

Runnable runnable = new Runnable() {
    @Override
    public void run() {
        // Do something exciting here
    }
};

Next you’ll need to decide how many threads you want to run, and you’ll need a list (java.util.List and java.util.ArrayList here) to keep track of the Thread objects so you can wait on them later. If you wanted to run four threads you could do this:

int threadCount = 4;

// Keep the threads in a list so we can join them later
List<Thread> threads = new ArrayList<Thread>();

for (int threadLoop = 0; threadLoop < threadCount; threadLoop++) {
    // XXX - Put the runnable block from above right here

    // Create a new thread
    Thread thread = new Thread(runnable);

    // Add the thread to our thread list
    threads.add(thread);

    // Start the thread
    thread.start();
}

That will start four threads. It’s best to use a variable so you can update it and use it in other places like calculating your statistics. Now let’s wait for all the threads to finish:

// Loop through the threads
for (Thread thread : threads) {
    try {
        // Wait for this thread to die
        thread.join();
    } catch (InterruptedException e) {
        // Ignore this but print a stack trace
        e.printStackTrace();
    }
}

Finally, you’ll want to time all of this. I do something very simple here. Before all of the code I do this:

long startTime = new Date().getTime();

After all of the code I do this:

long endTime = new Date().getTime();
long durationInMilliseconds = endTime - startTime;

With all of that in place you can now measure how long your code ran and then calculate important metrics about it. For example, if this code did 10,000 operations per thread and ran with 4 threads, you would take the duration and divide it by 40,000 to get an idea of how many milliseconds each operation took. Just make sure you use doubles or you’ll lose all of your precision to integer division. Do this (assuming your number of operations is stored in a variable called “operations”):

double millisecondsPerOperation = (double) durationInMilliseconds / (double) operations;
double operationsPerMillisecond = (double) operations / (double) durationInMilliseconds;

These are just reciprocals of each other but sometimes one value is a lot easier to understand than the other so I usually calculate them both.
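
Putting all the pieces together, here’s a self-contained sketch (the busy-work loop inside the Runnable is a made-up placeholder, just so there’s something to measure):

import java.util.ArrayList;
import java.util.Date;
import java.util.List;

public class ThreadTimingExample {
    public static void main(String[] args) {
        final int operationsPerThread = 10000;
        int threadCount = 4;

        Runnable runnable = new Runnable() {
            @Override
            public void run() {
                // Do something exciting here; this loop is just placeholder work
                double sum = 0;
                for (int i = 0; i < operationsPerThread; i++) {
                    sum += Math.sqrt(i);
                }
            }
        };

        List<Thread> threads = new ArrayList<Thread>();

        long startTime = new Date().getTime();

        // Start all of the threads
        for (int threadLoop = 0; threadLoop < threadCount; threadLoop++) {
            Thread thread = new Thread(runnable);
            threads.add(thread);
            thread.start();
        }

        // Wait for all of the threads to die
        for (Thread thread : threads) {
            try {
                thread.join();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }

        long durationInMilliseconds = new Date().getTime() - startTime;

        long operations = (long) threadCount * operationsPerThread;
        double millisecondsPerOperation = (double) durationInMilliseconds / (double) operations;
        System.out.println(operations + " operations in " + durationInMilliseconds
                + " ms (" + millisecondsPerOperation + " ms per operation)");
    }
}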

Now that you have those statistics you can try different thread counts, optimize code/loops, etc. Good luck! Post in the comments with any ideas and/or issues.

How-To: Fix “Chown: Cannot Dereference” Errors in Cloudera CDH on Debian/Ubuntu Linux When Upgrading


WARNING! Do not do this on production clusters unless you are willing to take responsibility for any issues that may occur. This wipes out all of your logs and potentially other files. Always have a backup before trying anything like this. I take no responsibility for issues that may arise from running any or all of these instructions.

When I tried to upgrade my CDH installation today I received many errors from dpkg that caused the upgrade to fail. The errors looked like this:

chown: cannot dereference `/var/log/hadoop-0.20/userlogs/job_201202031049_0008/attempt_201202031049_0008_m_000015_0': No such file or directory
chown: cannot dereference `/var/log/hadoop-0.20/userlogs/job_201202031049_0008/attempt_201202031049_0008_m_000003_0': No such file or directory
chown: cannot dereference `/var/log/hadoop-0.20/userlogs/job_201202031049_0008/attempt_201202031049_0008_m_000009_0': No such file or directory
chown: cannot dereference `/var/log/hadoop-0.20/userlogs/job_201202031049_0008/attempt_201202031049_0008_m_000018_0': No such file or directory

...

dpkg: error processing hadoop-0.20 (--configure):
 subprocess installed post-installation script returned error exit status 123
dpkg: dependency problems prevent configuration of hadoop-0.20-tasktracker:
 hadoop-0.20-tasktracker depends on hadoop-0.20 (= 0.20.2+923.195-1~squeeze-cdh3); however:
  Package hadoop-0.20 is not configured yet.

My simple fix, not for production clusters, is to do the following:

  • Step 1: Become the HDFS user and stop Hadoop by running

    ~/bin/stop-all.sh

  • Step 2: Become root and remove all of your Hadoop related logs by running

    rm -rf /var/log/hadoop-0.20/*

  • Step 3: Become root and run your upgrade by running

    apt-get upgrade

  • Step 4: Become the HDFS user and restart Hadoop by running

    ~/bin/start-all.sh
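
If you’d rather do the whole thing in one shot as root, here’s a sketch (assuming, as in the steps above, that the Hadoop control scripts live in the hdfs user’s ~/bin):

#!/bin/sh
# Stop Hadoop as the hdfs user
su - hdfs -c "~/bin/stop-all.sh"
# Remove the Hadoop logs (this is the destructive part!)
rm -rf /var/log/hadoop-0.20/*
# Run the upgrade
apt-get upgrade
# Restart Hadoop as the hdfs user
su - hdfs -c "~/bin/start-all.sh"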

After that your installation should be working and up to date again. Post in the comments if it works for you or if you need any assistance.

Tip: Trimming the Tops and Bottoms of Text Files With Head and Tail


Normally the head and tail applications on Linux are good for what their names imply: head gives you the first few lines of a file, and tail gives you the last few lines of a file and even lets you watch the end of a file for changes. This is great, but what if you want an entire file except for the first few or last few lines? It turns out that head and tail have options to do this, and it’s incredibly useful for trimming files without knowing exactly how many lines they contain.

I’m writing this because I keep forgetting which one does what. Here’s how you can remember it and use it every day…

Tip #1: If you want everything except the first few lines of a file, use tail like this:

tail -n +3 input.file > output.file

An example file, like the output of a PostgreSQL query, might look like this:

column_a | column_b | column_c
---------+----------+---------
    1    |   bob    |  65000
    2    |   joe    |  80000
    3    |   jim    |  54000
(3 rows)

After running

tail -n +3 input.file > output.file

on this we’ll end up with output that looks like this:

    1    |   bob    |  65000
    2    |   joe    |  80000
    3    |   jim    |  54000
(3 rows)

The best way to remember this is that you want everything until the end of the file starting at the third line.

Tip #2: If you want everything except the last few lines of a file, use head like this:

head -n -2 input.file > output.file

Using the same example file we end up with:

column_a | column_b | column_c
---------+----------+---------
    1    |   bob    |  65000
    2    |   joe    |  80000
    3    |   jim    |  54000

The best way to remember this is that you want everything from the beginning of the file excluding the last two lines. Note that there is a blank line after “(3 rows)” and we want to remove that too.

Tip #3: If you need to trim from both sides you can pipe like this:

tail -n +3 input.file | head -n -2 > output.file

Using the same example we end up with:

    1    |   bob    |  65000
    2    |   joe    |  80000
    3    |   jim    |  54000

This now translates to start at the third line and stop two lines from the end. If you ever forget just come back here and re-read the examples.