From ago control wiki

Random ideas on what I'd like to fix/implement/work on in Agocontrol



Logging

  • Targets: C++ agoclient, core, the Python agoclient, and finally the devices
  • Goal: Replace the current mix of printf/fprintf/cout/cerr with a proper logging infrastructure
  • Why: Proper logging greatly aids debugging.
  • Minimum requirements: multiple levels (error, warning, info, debug, trace), configurable output (stdout or file), thread-safe
  • Good to have: configurable levels per logger, for example TRACE logging in the app but only DEBUG in agoclient.


C++

There is no "standard" logging library in C++. There are, however, lots of options.

It would be nice to let agoclient log to some kind of generic interface, which the end user application can hook into and get log messages delivered. The core apps (and C++ devices under our control) should use reusable init code for setting up logging targets.
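As a sketch of what such a generic interface could look like (all names below are hypothetical, not existing agoclient API): agoclient logs through a minimal sink interface, and the end-user application installs whatever sink it wants. A real implementation would also need locking for thread-safety and per-logger thresholds.

```cpp
#include <iostream>
#include <string>

// Hypothetical sketch: severity levels matching the minimum requirements above.
enum class LogLevel { Trace = 0, Debug, Info, Warning, Error };

// The generic interface agoclient could log through; the application
// hooks in by providing its own implementation.
class LogSink {
public:
    virtual ~LogSink() {}
    virtual void write(LogLevel level, const std::string& msg) = 0;
};

// Default sink: plain stderr output; a file or syslog sink would look the same.
class StderrSink : public LogSink {
public:
    void write(LogLevel level, const std::string& msg) override {
        static const char* names[] = {"TRACE", "DEBUG", "INFO", "WARN", "ERROR"};
        std::cerr << names[static_cast<int>(level)] << " " << msg << std::endl;
    }
};

// Front-end used by agoclient/core code; drops messages below the threshold.
class Logger {
public:
    Logger(LogSink* sink, LogLevel threshold) : sink_(sink), threshold_(threshold) {}
    bool enabled(LogLevel level) const { return level >= threshold_; }
    void log(LogLevel level, const std::string& msg) {
        if (enabled(level)) sink_->write(level, msg);
    }
private:
    LogSink* sink_;
    LogLevel threshold_;
};
```

With separate Logger instances per component, the "TRACE in app, DEBUG in agoclient" case is just two different thresholds.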

Boost.Log Pros:

  • Very flexible, supports lots of features. End-user app can hook in and add whatever configurations they'd like.
  • We already depend on Boost, so no extra dependency


Cons:

  • Introduced into Boost in 1.54. Debian wheezy uses 1.49, so that's a no-go. Could possibly write a wrapper for < 1.54, if we limit the features used.

glog (google-log)

log4cxx Cons:

  • No updates since 2008?

pantheios Pros:

  • Claims to be superduper fast


Cons:

  • Seems unmaintained
  • No support for different loggers?
  • Seems verbose

easylogging++ Pros:

  • Header only
  • Maintained
  • TIMED_SCOPE, nice!



Python

I see no reason not to use the standard Python logging library.

What's required is some common, reusable code for configuring logging. This should not be forced upon the devices, though; they might have their own logging preferences.

"App base"

Agocontrol is built from multiple applications. Currently, each has its own main() and initializes an AgoClient. Although this isn't very much code, with additional features such as log configuration and command line options it would be nice to have a simple base class for building the core apps. This class would also catch unhandled exceptions, log a semi-nice error (so you at least know when it crashed), and maybe also print a stack trace so GDB isn't required. We still want a core dump, though, for proper debugging.

This applies to both Python and C++. Some pseudocode:

   abstract class AgoApp:
       constructor(appName, argc, argv):
           this.argc, this.argv = argc, argv
           setupLogging()
           parseCommandLine()
           log.info("Starting " + appName)
       run():
           try:
               appMain()
           catch e:
               log.error("Unhandled exception", e)
               rethrow

   class AgoRPC extends AgoApp:
       constructor(argc, argv):
           super.constructor("agorpc", argc, argv)
       appMain():
           ..... agorpc main loop .....

   main(argc, argv):
       return AgoRPC(argc, argv).run()

Catching exceptions

In C++, it's not possible to reliably catch an exception, rethrow it, and still have the original stack trace available for GDB. And there is no really portable way to print a stack trace. The question is: is a stack trace even useful without a core dump?

Something like this would work though, to get at least a notification:

  #include <csignal>
  #include <iostream>

  int agomain(int argc, char **argv) {
     /* app main routine */
     return 0;
  }

  void signalHandler(int sig, siginfo_t *siginfo, void *context) {
     std::cerr << timestamp() << " Something broke really bad, crashing.. " << std::endl;
     signal(sig, SIG_DFL);  // restore the default handler and re-raise,
     raise(sig);            // so we still get a proper core dump
  }

  int main(int argc, char **argv) {
     struct sigaction sa = {};
     sa.sa_sigaction = &signalHandler;
     sa.sa_flags = SA_RESTART | SA_SIGINFO | SA_ONSTACK;
     sigaction(SIGSEGV, &sa, NULL);
     sigaction(SIGABRT, &sa, NULL);
     sigaction(SIGFPE, &sa, NULL);
     return agomain(argc, argv);
  }

With this we still get a proper core dump at least, with the bonus that we log when the problem happens.

Make agoclient asynchronous

Today we have one thread handling incoming messages, and an app thread is required for sending messages. While waiting for a response, this thread blocks. This is most visible in agorpc, where we start 50 threads by default to run an otherwise fully asynchronous web server.

I'd like to have AgoConnection::run() handle all fetches, using the nextReceiver API. A new sendMessageReplyAsync would be added, which in addition to the other parameters takes a callback. It registers the new receiver with the main thread and then returns; the callback is triggered either on response or on timeout. The old sendMessageReply call will remain.
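A rough C++ sketch of how this could work (all type and member names here are assumptions for illustration, not the real agoclient API): the async call registers a callback under a correlation id and returns immediately; the single receive loop later dispatches the reply, or a timeout, to it.

```cpp
#include <functional>
#include <map>
#include <mutex>
#include <string>

// Hypothetical placeholder for the real message type.
struct Message { std::map<std::string, std::string> content; };

// Invoked with the reply, or with timedOut=true if none arrived in time.
using ReplyCallback = std::function<void(const Message& reply, bool timedOut)>;

class AgoConnection {
public:
    // Register the callback and return without blocking (sketch only;
    // handing the message to the sender thread is omitted).
    void sendMessageReplyAsync(const Message& msg, ReplyCallback cb) {
        std::lock_guard<std::mutex> lock(mutex_);
        pending_[nextId_++] = cb;
        // ... hand msg + correlation id to the main/sender thread ...
    }

    // Called from the single receive loop when a reply (or timeout) arrives.
    void dispatchReply(int correlationId, const Message& reply, bool timedOut) {
        ReplyCallback cb;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            auto it = pending_.find(correlationId);
            if (it == pending_.end()) return;  // unknown or already handled
            cb = it->second;
            pending_.erase(it);
        }
        cb(reply, timedOut);  // invoke outside the lock
    }

private:
    std::mutex mutex_;
    int nextId_ = 0;
    std::map<int, ReplyCallback> pending_;
};
```

The callback map is the only shared state, so agorpc could drive everything from one thread plus the receive loop.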

The main reason for this is to cut down on bloat, i.e. have agorpc using one single thread to do all processing.

Could optionally make use of boost::asio here, but it might be unnecessary and tricky to integrate against.


pyowmaster

Make my pyowmaster project a first-class agocontrol device. Possibly replace agowfs in the future.

Today, I've manually configured some DS2406 input buttons to use messagesend:

   ch.A = input active high uuid=592f3b68-1234-123b-123b-123412341234 command=on


Clean up JS, modularize

The current JS setup loads everything into the global namespace. Each page defines an arbitrary init function, which sets a global "model" object. There are some downsides with this setup:

  • app.js has a map between "page name" and filename to load
  • app.js has a map between "page name" and the init function to call
  • It is not possible to (cleanly) change from one page to another, since "model" var is global.

In addition, app.js mixes a lot of code: data model, presentation, and UI "routing".

It would be nice to clean up each "page" module to use a proper module loading system, such as AMD (e.g. RequireJS). This would also aid in minifying a production build. Each page module would ideally expose a common init function and keep its model to itself.

The current mix of querystring navigation and (for the floorplan only) manual history.state handling should be replaced with a proper routing system such as Sammy.js. With these changes, it would be possible to make the UI a true Single Page Application without the nasty reloads we have now (which are totally unnecessary, since they just reload the same static HTML and basic JS files plus a special page JS).

RequireJS ideas

Recommended reading:

Using RequireJS would allow a modular approach, where each page would be a module of its own. A module means one file of code. In dev mode, each file is loaded dynamically in the browser, but for "production installations" (i.e. when installed via package), a precompiled JS file with all "common" modules would be used. This means that for regular usage, only a single JS file is loaded. Modularizing the code also encourages proper decoupling of features with well-defined interfaces. This is good if we want to allow external usage of the JS; for example, the backend pieces should be written fully decoupled from the presentation and presentation logic, so other apps can use them.

To convert the existing code-base to RequireJS modules, each file needs to be wrapped with:

  define(['requirementA', 'requirementB'], function(A, B) { /* actual module code */ });

The main HTML would have something like:

       <script data-main="js/main" src="js/libs/require.js"></script>

And a new js/main.js file would contain:

   // js/main.js
   // usually some path/shim mapping is required here, for non-RequireJS-compatible libraries
   // Start the main app logic.
   requirejs(['ui'], function(ui) {
       ui.init();
   });


 // js/ui.js
 define(['jquery', 'backend'], function($, agoBackend) {
   // define main UI stuff
   return { init: function(){
       agoBackend.subscribe(function(event){ /* do something... */ });
   } };
 });


 // js/backend.js
 define(['jquery'], function($) {
   // define a module which has the pure data API for all AGO API stuff
   return {
     /* Register a callback which is called whenever we get an event from the backend */
     subscribe: function(cb){ /* ... */ }
     // handle subscribe/getevent/unsubscribe calls internally and feed events to all subscribed listeners
   };
 });