Thursday, May 27, 2010

Testing Akka Remote Actor using Serializable.Protobuf

Just a short post to show how you can test a remote actor where the messages are serialized using Google protobuf. Protobuf is used automatically for the internal Akka protocols (it is what makes remote actors possible in the first place), but the messages that you define yourself are of course not automatically encoded with protobuf. That would require quite a bit of reflection, bundling the protoc compiler into the framework, and some translation or mapping to make it work automagically, a path down the rabbit hole I'm glad the Akka team didn't take.

By default, Java serialization is used. There is a serialization setting in akka.conf, but that is only for the cluster protocol. I'm using Akka 0.8, by the way.

Let's get back to the task at hand.

First you need to install protobuf 2.2.0 (check here). Version 2.3.0 gave me some issues: the generated Java classes did not compile. Akka 0.8 also ships with the 2.2.0 jar, so you might be asking for trouble with 2.3.0.

Create a .proto file that describes the message (in my case I named the file ProtobufCommandPojo.proto):

package unit.test.proto;

message workercommand {
  required uint64 id = 1;
  required string name = 2;
  required string data = 3;
}

Then generate the java code for the protobuf message (in my case to a java directory):


$ protoc ProtobufCommandPojo.proto --java_out ../../java

This generates a Java source file with the outer class ProtobufCommandPojo, which contains the nested message class and its builder.
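
Just as a quick sanity check (this snippet is not part of the original post; it assumes the generated class keeps the lower-case message name workercommand from the .proto file), you can round-trip a message through its bytes to confirm the generated code behaves as expected:

import unit.test.proto.ProtobufCommandPojo

object ProtoRoundTrip {
  def main(args: Array[String]): Unit = {
    // build a message with the generated builder
    val original = ProtobufCommandPojo.workercommand.getDefaultInstance().toBuilder()
      .setId(42L)
      .setName("worker-42")
      .setData("payload")
      .build()

    // serialize to bytes and parse them back with the generated parseFrom
    val bytes = original.toByteArray
    val copy = ProtobufCommandPojo.workercommand.parseFrom(bytes)

    assert(copy.getId == original.getId && copy.getName == original.getName,
      "round-trip through protobuf bytes should preserve the fields")
  }
}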

Create a message case class that mixes in the Serializable.Protobuf trait:
package unit

import se.scalablesolutions.akka.serialization.Serializable
import test.proto.ProtobufCommandPojo

/**
 * Message class that mixes in the Serializable.Protobuf trait
 * and uses the generated protobuf classes.
 */
case class ProtobufMixedIn(id: Long, name: String, data: String) extends Serializable.Protobuf[ProtobufMixedIn] {

  def getMessage: com.google.protobuf.Message = {
    ProtobufCommandPojo.workercommand.getDefaultInstance().toBuilder().setId(id).setData(data).setName(name).build()
  }
}

As you can see, the generated class does the actual protobuf message building; this is basically the mapping between the protobuf message and the case class.
Note: the Serializable.Protobuf trait has a default implementation of fromBytes, which assumes a standard mapping between your class and the protobuf message definition:
def fromBytes(bytes: Array[Byte]): T = getMessage.toBuilder.mergeFrom(bytes).asInstanceOf[T]

I expect that if you build the protobuf message in a different way than the getMessage method 'expects', you might not get the result you want. I'm guessing there is some reflection or a naming convention at work under the hood to make this default implementation possible; I'll leave that investigation to you if you want to know exactly how it works.
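
If the default does not do what you want, one option is to override fromBytes on the case class and map the protobuf fields back onto the case class yourself. This is just a sketch of my own (the class name ProtobufMixedInExplicit is made up, and it assumes the generated accessors getId, getName and getData):

package unit

import se.scalablesolutions.akka.serialization.Serializable
import test.proto.ProtobufCommandPojo

case class ProtobufMixedInExplicit(id: Long, name: String, data: String) extends Serializable.Protobuf[ProtobufMixedInExplicit] {

  def getMessage: com.google.protobuf.Message = {
    ProtobufCommandPojo.workercommand.getDefaultInstance().toBuilder().setId(id).setData(data).setName(name).build()
  }

  // map the protobuf fields back onto the case class ourselves,
  // instead of relying on the trait's default fromBytes
  override def fromBytes(bytes: Array[Byte]): ProtobufMixedInExplicit = {
    val proto = ProtobufCommandPojo.workercommand.parseFrom(bytes)
    ProtobufMixedInExplicit(proto.getId, proto.getName, proto.getData)
  }
}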

And then the unit test. You simply work with the case class ProtobufMixedIn, as you would with a normal message. Akka does the rest.

package unit

import se.scalablesolutions.akka.actor.Actor
import org.scalatest.{BeforeAndAfterAll, Spec}
import org.junit.runner.RunWith
import org.scalatest.junit.JUnitRunner
import se.scalablesolutions.akka.remote.{RemoteClient, RemoteServer}
import org.scalatest.matchers.MustMatchers

/**
 * Test to check if communicating with protobuf serialized messages works.
 */
@RunWith(classOf[JUnitRunner])
class ProtobufTest extends Spec with MustMatchers {
  describe("Send using Protobuf protocol") {

    it("should receive local protobuf pojo and reply") {
      // send a local msg, check if everything is ok
      val actor = new TestProtobufWorker()
      actor.start
      val msg = new ProtobufMixedIn(1, "my-name-1", "my-data-1")
      val result: Option[ProtobufMixedIn] = actor !! msg
      result match {
        case Some(reply: ProtobufMixedIn) => {
          // test actor changes name to uppercase
          reply.name must be === "MY-NAME-1"
        }
        case None => fail("no response")
      }
      actor.stop
    }

    it("should receive remote protobuf pojo and reply") {
      //start a remote server
      val server = new RemoteServer()
      val actor = new TestProtobufWorker()
      server.start("127.0.0.1", 8091)
      //register the actor that can be remotely accessed
      server.register("protobuftest", actor)
      val msg = new ProtobufMixedIn(2, "my-name-2", "my-data-2")

      val result: Option[ProtobufMixedIn] = RemoteClient.actorFor("protobuftest", "127.0.0.1", 8091) !! msg

      result match {
        case Some(reply: ProtobufMixedIn) => {
          // test actor changes name to uppercase
          reply.name must be === "MY-NAME-2"
        }
        case None => fail("no response")
      }
      //shutdown server and client processes
      server.shutdown
      RemoteClient.shutdownAll
    }
  }
}

/**
 * Actor that sends back the message uppercased.
 */
class TestProtobufWorker extends Actor {
  def receive = {
    case msg: ProtobufMixedIn => {
      log.info("received protobuf command pojo:" + msg.id)
      val r = new ProtobufMixedIn(msg.id, msg.name.toUpperCase, msg.data.toUpperCase)
      log.info("sending back a renamed pojo:" + r.name)
      reply(r)
    }
  }
}

5 comments:

  1. Do you really manage to test your actors like this? I cannot manage to make a test fail inside actors because of the fault handling that is catching all exceptions :/

  2. Sure, I manage to test in this way. You can use the supervisor hierarchies to handle exceptions and then check whether your actor was restarted. I don't test that way though; I test whether I get a certain response or not, or whether a message was sent on to another actor later, or not. It's all about the message passing, and that is what I test. An exception is not something that really fits in that model; it's quite different from calling methods. I never explicitly throw exceptions from actor code. The "let it crash" approach works a bit differently: I view an exception inside an Actor as a reason for the Actor to be restarted.

  3. Thanks for taking the time to write this up -- it's the clearest info I've found so far on using Protobuf (about which I know little) to serialize Akka messages to remote actors. Sounds like you'd have to do some SBT work to get the build to run protoc as needed; do you know of a plugin that helps with this, or has this improved since 2010?

    And BTW, I just bought your book!

  4. Hi Nicholas,

    Cool, I'm surprised everything still works as in the 2010 post. I haven't used this for a while; at the time I think we just called protoc from sbt 0.7 using a java ProcessBuilder (calling a shell), roughly like the sketch below.
    And I think we just committed the generated java protobuf classes; our protocol did not change much, and when it did we would regenerate the classes by hand and commit them again.
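
    Something along these lines (a rough sketch from memory with made-up paths, not the exact build code we had):

    import scala.io.Source

    object RunProtoc {
      def main(args: Array[String]): Unit = {
        val pb = new ProcessBuilder(
          "protoc",
          "--java_out=src/main/java",
          "src/main/protobuf/ProtobufCommandPojo.proto")
        pb.redirectErrorStream(true) // merge stderr into stdout so protoc errors show up below
        val process = pb.start()
        Source.fromInputStream(process.getInputStream).getLines().foreach(println)
        val exit = process.waitFor()
        require(exit == 0, "protoc failed with exit code " + exit)
      }
    }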

    I'm not sure what the status is of https://github.com/sbt/sbt-protobuf but you could give it a try.

    Thanks for supporting the book!

  5. I guess it is about time to update these posts... it's already 3 years later ;-)
