Testing Sanity

The core task of software development is sanity testing.


cgp's blog

Posted by cgp

A lot of shops run the Apache web server in front of their Tomcat servers. They use it essentially as a proxy, so that when the container is taken down, the system can display a maintenance message or redirect to a web server which is up.

But if you have something else doing this job, and have chosen to run just Tomcat, how can you recreate those really nice Apache logs to run analysis software like awstats against?

Well, actually, it's incredibly simple, and a quick look at the default configuration provides pretty much everything a server needs:
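The snippet that sat here is missing from this page; a minimal sketch of the AccessLogValve, based on the commented-out example in a Tomcat-era default config (the attribute values here are assumptions, so check the docs for your Tomcat version), would be:

```xml
<!-- Inside the <Host> element of conf/server.xml -->
<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="logs"
       prefix="access_log."
       suffix=".txt"
       pattern="combined"
       resolveHosts="false"/>
```

The pattern="combined" setting produces Apache's combined log format, which awstats and similar analyzers understand out of the box.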

Just remember that the logger has attributes/options, not shown in the valve documentation (but referred to there), that control the naming of the log files.


You can't specify an absolute path in the directory attribute, so to get to the root or another path, you're going to have to go: "../../../../" until you get there.


  • Valve Reference Documentation
  • File Logging (controls naming of access logs)

  • Log4J

    18 Aug 2008
    Posted by cgp

    Everyone seems to use log4j. The configuration always seemed a bit cryptic to me.

    For instance, when I had the following entry:

    log4j.rootLogger=debug, stdout, daily

    Nothing logged to the debug "logger"! It had to come after stdout in the list as such:

    log4j.rootLogger=warn, stdout, daily, debug

    The reason for this is simple! The first parameter in this list specifies the logging threshold for each of the appenders attached to the rootLogger. If you specify a valid value ("debug", "info", "warn", "error", "fatal"), it uses that as the threshold. If you do not specify a valid threshold value, it assumes a default (I'm not sure what the default is).

    So, specifying the names stdout/daily/debug for log4j.rootLogger creates appenders by those names under the rootLogger. Options are set for each appender as such:
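    The entries that went here are gone from this page; a sketch using standard log4j 1.x appender classes, with names matching the rootLogger line above (the file paths and date pattern are assumptions), would be:

    ```properties
    log4j.appender.stdout=org.apache.log4j.ConsoleAppender
    log4j.appender.daily=org.apache.log4j.DailyRollingFileAppender
    log4j.appender.daily.File=logs/app.log
    log4j.appender.daily.DatePattern='.'yyyy-MM-dd
    log4j.appender.debug=org.apache.log4j.FileAppender
    log4j.appender.debug.File=logs/debug.log
    ```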


    The root of each of those entries declares the class to be used (each class has a different purpose).


    And then, specifying the settings of each of those is just a matter of knowing the options available to the class:

    log4j.appender.debug.layout.ConversionPattern=%d{h:mm:ssa} %5p (%F:%L) - %m%n

    Pretty nifty. You can use XML to set up these configs too, which makes it clearer which appender owns each setting.

    One other thing: in addition to being able to specify a logger under log4j.rootLogger, you can specify a logger like so:
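    The example that sat here is missing; a hypothetical entry (the logger name myLogger and appender name mylog are made up for illustration) would look like:

    ```properties
    log4j.logger.myLogger=debug, mylog
    ```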


    This will set the default threshold and also specify the name of the appender (the logger).

    The advantage of this being that you can configure it to take only a particular subset of log messages (such as those for EJB).

    Configuration remains the same.


    Additivity sets whether logs which went to that particular log will appear in the root log:
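    The line that went here is missing; a sketch of an additivity entry (myLogger is a made-up logger name) would be:

    ```properties
    log4j.additivity.myLogger=false
    ```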


    So, the only thing that confuses me here is: given that this is not set against any particular class, doesn't this mean ALL logs will not be sent to the root logger?

    This seems like it would work, and make more sense:
    You can also set a logger for a particular class, simply by specifying the class after logger:
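    The entry is gone from this page; a sketch, reusing the daily appender name from earlier (that pairing is an assumption), would be:

    ```properties
    log4j.logger.com.cgpsoftware=debug, daily
    log4j.additivity.com.cgpsoftware=false
    ```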


    Now logs from com.cgpsoftware should appear in the appender log from this entry, but not in the root log (according to what I understand).


    Posted by cgp

    I'm not certain why, but I was getting "could not reassociate uninitialized transient collection" for the longest time when trying to merge my Hibernate object into the database. The odd thing is, it would work in one set of unit tests but not another.

    The thing that made it work?

    Setting my fetch types from lazy to eager. Of course, this makes my queries run nice and slow, as they now fetch all of the subrecords.
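    In annotation terms, the change amounts to something like this mapping fragment (the entity and field names are made up; javax.persistence is assumed for a Hibernate 3-era setup):

    ```java
    import javax.persistence.FetchType;
    import javax.persistence.OneToMany;
    import java.util.Set;

    public class Order {
        // Was fetch = FetchType.LAZY; switching to EAGER is what made
        // merge() stop throwing "could not reassociate uninitialized
        // transient collection" -- at the cost of loading every subrecord.
        @OneToMany(fetch = FetchType.EAGER)
        private Set<LineItem> items;
    }
    ```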

    I found this discussion to be interesting, but not necessarily helpful.