Building a Modern Line-Of-Business Application: Part 2

In last issue's overview, we discussed the lack of major changes in LOB (Line-Of-Business) applications despite the wealth of changes in consumer application design, hardware interfaces, and user expectations. In this article we'll talk a bit more about what goes into creating a modern Line-Of-Business application from scratch.

Logging

The problem with logging is that it is almost always added as an afterthought, not as part of the core structure of the application. It shouldn't be. Logging is essential. It doesn't matter if all you are building is a mobile app that displays pretty pictures, a consumer Xbox game, or a Line-Of-Business application. There are times when you must have additional information to help find issues when users start calling about a problem.

In our area of focus, Line-Of-Business applications, logging is even more important because of the complexity of the application and data. Line-Of-Business applications require different types of logs for different purposes:

  1. System Errors/Information
  2. User and Application
  3. Security
  4. Change

Each of these types requires different pieces of information to be stored. Likewise, they need different notification options when they are updated. Anyone who has written log files before knows that the process generates massive amounts of rarely-used data. But when the logs are needed, it is very important to have all the clues, or at least the details needed to recreate the original data if something is corrupted.

System Error/Information Logs

This type of log is often over-used as a catch-all. Piling on anything that isn't specific to the other logs can hide the important parts under mountains of miscellany. Part of the problem is that people try to use System Error/Information logs for debugging. They are better used for tracking specific events that happen in the application, e.g.: Backup Started, Resize Done, Low Memory, Low Disk, Background Process Started, Workflow Engine Failed.

System logs should have the following structure to capture the right type of information (a short code sketch of writing such a record follows the field list):

ID - Unique Record ID

<1> - Date

<2> - Time

<3> - Port or PID

<4> - Account/Database ID

<5> - User ID

<6> - Process/Program creating the message. This is used to help track all messages being generated by a specific process. For example: a backup or file-resize process.

<7,n> - Error Code(s). Error codes represent standard, short, human-readable messages. This is similar to the 201 or 404 errors used with the MultiValue ABORT command.

The error code should contain more information than just an error number. For example, if this log contained the "201 - file not found" error message, then the error code may look something like this:

201~CUSTOMER

Or, if you were recording missing record IDs with the "202 - item not found" error message, you'd include the file name (CUSTOMER), the ID that was missing (1234), and the associated file/record (ORDER and 554-1) that was connected to this missing data:

202~CUSTOMER~1234~ORDER~554-1

<8> - Human Readable Short Message. A short, human-readable summary explaining the error. Try to keep it to one line of information.

<9,n> - Human Readable Long Message. A detailed description of the error and/or how to go about fixing it. There may not always be information here, but when there is, it tells a person where to go and what to do if this error shows up and action is needed.

Example: DAT Tape Read Only. Eject the DAT tape from the drive and check that the Read-Only switch is not set. Reinsert the tape, then go to menu options 1, 2, then 5 to restart the backup.

<10,n> - Stack/Trace information. If this is a major error, then there should be tracing information. This information is all about how the process got to the point where we needed to log the error. It may be the subroutine CALL stack, or it may be a programmer's description of the steps that got to this point.
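To make the layout concrete, here is a minimal sketch, in MultiValue BASIC, of a routine that writes a system log record using the fields above. The SYSTEM.LOG file name, the GET.NEXT.LOG.ID routine, and the use of @USERNO for the port are assumptions; adjust them to your own framework and MultiValue flavor.

SUBROUTINE SYS.LOG(ACCT.ID, USER.ID, PROC.NAME, ERR.CODES, SHORT.MSG, LONG.MSG, TRACE)
* Sketch only: file, routine, and port/PID names are assumptions.
   OPEN 'SYSTEM.LOG' TO F.LOG ELSE RETURN ;* never crash the caller over logging
   CALL GET.NEXT.LOG.ID(LOG.ID)           ;* hypothetical unique-ID routine
   REC = ''
   REC<1> = DATE()                        ;* internal date
   REC<2> = TIME()                        ;* internal time
   REC<3> = @USERNO                       ;* port/PID; source varies by platform
   REC<4> = ACCT.ID
   REC<5> = USER.ID
   REC<6> = PROC.NAME
   REC<7> = ERR.CODES                     ;* multivalued, e.g. 202~CUSTOMER~1234~ORDER~554-1
   REC<8> = SHORT.MSG
   REC<9> = LONG.MSG
   REC<10> = TRACE                        ;* multivalued CALL stack or description
   WRITE REC ON F.LOG, LOG.ID
   RETURN
END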

User and Application Logs

This is where the debugging information belongs. The when, where, and why go here. User and application logs need to be able to be turned on and off at will, and should be considered transitory, not persistent. Since these logs are transitory, they do not need to be saved in backups, and should be cleared on a regular basis because of the amount of space they use. Most application developers create these logs in their client applications but neglect the server-side user and application logs. We need them in both places.

User and application logs should have the following structure to handle this type of information (a sketch of a log write follows the field list):

ID - Sequential ID :"*": User ID :"*": Application ID

It is important to preserve the order in which messages are logged, since that order matters when tracing a problem.

<1> - Date

<2> - Time

<3> - Port or PID

<4> - Process/Program creating the message. This is used to help track all messages being generated by a specific process. For example, a backup or file resize process.

<5> - Log Message. Since these logs are about tracing what the user is doing, this message may contain anything that the developer deems important. This information is not meant to be used for structured error messages; that is what the System logs are for.

<6,n> - Additional Log IDs. These reference related entries, such as a system log or audit log record ID, that contain more detailed, structured information about this log message. Example: SYSTEM.LOG*5514585-588ASS5-55555

<7,n> - Stack/Trace information. If this is a major error, then there should be tracing information. This information is all about how the process got to the point where we needed to log the error. It may be the subroutine CALL stack, or it may be a programmer's description of the steps that got to this point.
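As a rough MultiValue BASIC sketch (the USER.LOG file, the /ULOG/ named common, and @USERNO are assumptions), a user/application log write might look like the following. The sequential portion of the ID is a per-session counter kept in named common, which preserves the order of messages within that session.

SUBROUTINE USER.LOG(USER.ID, APP.ID, PROC.NAME, MSG, RELATED.IDS, TRACE)
* Sketch only: file, common, and port/PID names are assumptions.
   COMMON /ULOG/ ULOG.SEQ
   OPEN 'USER.LOG' TO F.ULOG ELSE RETURN
   ULOG.SEQ += 1
   LOG.ID = ULOG.SEQ:'*':USER.ID:'*':APP.ID ;* Sequential ID * User ID * Application ID
   REC = ''
   REC<1> = DATE()
   REC<2> = TIME()
   REC<3> = @USERNO                         ;* port/PID; source varies by platform
   REC<4> = PROC.NAME
   REC<5> = MSG                             ;* free-form trace message from the developer
   REC<6> = RELATED.IDS                     ;* e.g. SYSTEM.LOG*<system log record ID>
   REC<7> = TRACE
   WRITE REC ON F.ULOG, LOG.ID
   RETURN
END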

Security Logs

Security logs are exactly what you think they are. Tracking data on security events and user access belongs here. They should be persistent, to a point. The key idea is to provide a single place to look for user accesses and overrides. This is where you go to investigate whether a user is following policies or doing something else.

Security logs should have the following structure:

ID - Unique ID

<1> - Date

<2> - Time

<3> - Port or PID

<4> - Account/Database ID

<5> - User ID

<6> - Process/Program creating the message. This is used to help track all messages being generated by a specific process. For example, a backup or file resize process.

<7> - Security Log Type: SUCCESS, FAILURE, OVERRIDE SUCCESSFUL, OVERRIDE FAILED, LOGOUT, CHANGED PASSWORD, etc.

<8,n> - Security Trace Value Name

<9,n> - Security Trace Value

An example of using the Security Trace Value and Value Name would be a price override. The information I would put into these fields would be the record that the override was associated with, the original amount, and the new amount it is becoming:

FILENAME]ID]ORIG.AMT]NEW.AMT

ORDER]1234]130]250
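A hedged sketch of logging that override in MultiValue BASIC follows. It assumes a SEC.LOG routine that writes the security log record the same way the earlier system-log sketch does; the routine name, its argument order, and @LOGNAME for the user are illustrative assumptions, not part of any existing library.

* Build the value-marked trace name/value pairs for the override.
   ACCT.ID = 'LIVE'                          ;* placeholder account/database ID
   USER.ID = @LOGNAME                        ;* login name; source varies by flavor
   TRACE.NAMES  = 'FILENAME':@VM:'ID':@VM:'ORIG.AMT':@VM:'NEW.AMT'
   TRACE.VALUES = 'ORDER':@VM:'1234':@VM:'130':@VM:'250'
* Hypothetical routine: account, user, process, log type, trace names, trace values.
   CALL SEC.LOG(ACCT.ID, USER.ID, 'ORDER.ENTRY', 'OVERRIDE SUCCESSFUL', TRACE.NAMES, TRACE.VALUES)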

Change Logs

Change logs are sometimes confused with audit logs, since most people are trying to track what data has changed, when, and by which user. That is half the purpose of a change log. It has to go beyond that and give us what we need to backtrack, and possibly reverse, the changes.

Change logs are usually associated with file triggers, but can be managed by an application directly as well. Change logs should be transitory. They are not required to live for extended periods of time. If you need to keep this information longer, then your developers should be creating an archive file, not a change log.

Change logs should have the following structure:

ID - Unique ID

<1> - Date

<2> - Time

<3> - Port or PID

<4> - Account/Database ID

<5> - User ID

<6> - Process/Program creating the message. This is used to help track all messages being generated by a specific process.

<7> - Record ID

<8> - File Name

<9,n> - Change Action: UPDATE, INSERT, DELETE

<10,n> - Field Position Changed. AMC or AMC,VMC

<11,n> - Old Value

<12,n> - New Value

<13,n> - Stack/Trace information. This should be the subroutine CALL stack, if available. At a minimum, it should record how and where the change originated.
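Here is a hedged MultiValue BASIC sketch of a change-log writer of the kind that might be called from a file trigger. It diffs the old and new records and stores only the attributes that changed. The CHANGE.LOG file, GET.NEXT.LOG.ID, and the argument list are assumptions; real trigger interfaces vary by MultiValue flavor, and attribute values containing value marks would need extra handling.

SUBROUTINE CHANGE.LOG(ACCT.ID, USER.ID, PROC.NAME, FILE.NAME, REC.ID, ACTION, OLD.REC, NEW.REC)
* Sketch only: names are assumptions; trigger interfaces vary by flavor.
   OPEN 'CHANGE.LOG' TO F.CHG ELSE RETURN
   CALL GET.NEXT.LOG.ID(LOG.ID)            ;* hypothetical unique-ID routine
   REC = ''
   REC<1> = DATE() ; REC<2> = TIME()
   REC<3> = @USERNO                        ;* port/PID; source varies by platform
   REC<4> = ACCT.ID ; REC<5> = USER.ID ; REC<6> = PROC.NAME
   REC<7> = REC.ID ; REC<8> = FILE.NAME
   MAX.AMC = DCOUNT(OLD.REC, @AM)
   IF DCOUNT(NEW.REC, @AM) > MAX.AMC THEN MAX.AMC = DCOUNT(NEW.REC, @AM)
   POS = 0
   FOR AMC = 1 TO MAX.AMC
      IF OLD.REC<AMC> # NEW.REC<AMC> THEN
         POS += 1
         REC<9, POS>  = ACTION             ;* UPDATE, INSERT, or DELETE
         REC<10, POS> = AMC                ;* field position changed
         REC<11, POS> = OLD.REC<AMC>       ;* old value
         REC<12, POS> = NEW.REC<AMC>       ;* new value
      END
   NEXT AMC
   WRITE REC ON F.CHG, LOG.ID
   RETURN
END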

Logging Notifications

While writing all these files is important, they lose much of their effectiveness if you aren't notified that a log has been updated. Most of what goes into the records can wait, but there are times when developers and admins need to be notified when a log is updated.

As part of the overall workflow, a process needs to exist to watch for specific log messages and send that information to a notification process. Borrowing from the SNMP logging protocol, each log type needs a hook (a place where subroutine CALLs can be added). These subroutines would do the work of deciding which details need to be sent and which notification option should be used; a sketch of such a hook follows the list of options below.

Some of the notification options that should be available to your application:

  • Nightly Reports
  • Email
  • SMS Text
  • IM (Instant Message) Text
  • SNMP
  • Syslog
  • Windows 10 Push Notification
  • Mobile App Push Notification
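A minimal sketch of such a hook in MultiValue BASIC follows. It assumes a LOG.HOOKS control file whose record ID is the log type and whose attributes list handler subroutine names; all of those names are illustrative, and the indirect CALL @ syntax may differ on your platform.

SUBROUTINE LOG.NOTIFY(LOG.TYPE, LOG.ID, LOG.REC)
* Sketch only: the LOG.HOOKS file and handler names are assumptions.
   OPEN 'LOG.HOOKS' TO F.HOOKS ELSE RETURN
   READ HOOKS FROM F.HOOKS, LOG.TYPE ELSE RETURN   ;* e.g. 'SYSTEM' -> list of handlers
   NUM.HOOKS = DCOUNT(HOOKS, @AM)
   FOR H = 1 TO NUM.HOOKS
      HANDLER = HOOKS<H>                           ;* e.g. 'NOTIFY.EMAIL', 'NOTIFY.SMS'
      CALL @HANDLER(LOG.TYPE, LOG.ID, LOG.REC)     ;* each handler decides what to send and how
   NEXT H
   RETURN
END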

Unique Record ID Management

Since MultiValue databases leave record ID management up to the software developer, this must be managed by the framework. Other databases have some record ID management features built in because of how they are required to store information. In this case, I'm going to assume we want to manage this information ourselves. Since we are already reinventing the wheel, that seems reasonable.

Sequential IDs

The most common type of ID management that most people develop is the sequential ID. This is the easiest way to create a unique ID for every record, but generating a sequential ID can cause lock contention that slows down other processes that are also using sequential IDs (see the sketch after the pros and cons below).

Pros:

  • Always unique
  • You will always know exactly what the next ID will be

Cons:

  • Record lock contention occurs when two processes try to increment the ID at the same time
  • Additional disk processing when reading/writing the sequential counter
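A typical sketch, assuming a CONTROL file that holds the counters, uses a record lock so two processes cannot grab the same ID. This is also exactly where the contention mentioned above comes from, since the second process waits on the lock until the first one writes.

SUBROUTINE GET.NEXT.SEQ.ID(COUNTER.NAME, NEXT.ID)
* Sketch only: the CONTROL file and counter item names are assumptions.
   OPEN 'CONTROL' TO F.CONTROL ELSE STOP 'Cannot open CONTROL'
   READU NEXT.ID FROM F.CONTROL, COUNTER.NAME ELSE NEXT.ID = 0 ;* lock the counter item
   NEXT.ID += 1
   WRITE NEXT.ID ON F.CONTROL, COUNTER.NAME                    ;* WRITE releases the lock
   RETURN
END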

GUID/UUID

GUID and UUID are basically the same. The only real difference is that UUID is the open-standard term and GUID is Microsoft's term. They both do the same thing: making sure the ID is unique based on the machine on which it is created.

If you have never seen a UUID, it looks like the following:

30dd879c-ee2f-11db-8314-0800200c9a66

GUIDs are actually very large numbers, represented in hex to make them easier to read. GUIDs and UUIDs are calculated, so there is no reason to store a counter anywhere, and they are unique across applications and often even across hardware platforms. This makes them ideal for creating IDs when you have a federated application with a central office holding all the information and remote stores or offices holding only their own information (see the sketch after the pros and cons below).

Pros:

  • Always Unique. Even across different databases
  • Calculated Value; so no additional disk overhead or locks

Cons:

  • Calculated Value; may require more processing depending on how the ID is generated
  • You won't know what the next ID will be, since each one is generated independently
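Few MultiValue environments have a built-in GUID/UUID function, so one hedged approach is to shell out to the operating system. The sketch below assumes a Unix host with the uuidgen utility and an MV flavor that supports EXECUTE ... CAPTURING; the exact shell verb and capturing syntax vary by platform.

SUBROUTINE GET.UUID(NEW.ID)
* Sketch only: assumes Unix, uuidgen, and EXECUTE ... CAPTURING support.
   EXECUTE 'SH -c "uuidgen"' CAPTURING OUTPUT
   NEW.ID = TRIM(OUTPUT<1>)   ;* e.g. 30dd879c-ee2f-11db-8314-0800200c9a66
   RETURN
END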

Date/Time IDs

Date/time IDs are great for logs but do sometimes require additional processing to make sure they are unique. They are exactly what the name describes: an ID made up of the current date and time.

Many times these IDs also include Port or PID numbers to offer additional uniqueness, and to prevent record collisions.

A common structure is:

DATE*TIME*PID

The main drawback to date/time IDs is the millisecond factor. You need to include something to handle milliseconds so your applications don't have a record ID collision.

Most MultiValue databases have a way to handle the millisecond factor, but if your system doesn't, then you need to do additional tests to make sure the ID doesn't already exist on file (see the sketch after the pros and cons below).

Pros:

  • Mostly unique, but has issues with millisecond factors
  • Calculated value; so no additional disk overhead or locks

Cons:

  • Additional checks may be required so you don't have record ID collisions
  • Must handle the millisecond factor, and must include a PID to keep uniqueness
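Here is a sketch of the DATE*TIME*PID approach with a collision check. The open file variable is passed in, and on a collision the routine simply waits a second before building a new ID; on platforms that expose milliseconds you would use those instead. The routine name and the @USERNO port source are assumptions.

SUBROUTINE GET.DATETIME.ID(F.FILE, NEW.ID)
* Sketch only: waits one second on a collision; use millisecond time if available.
   LOOP
      NEW.ID = DATE():'*':TIME():'*':@USERNO       ;* DATE*TIME*PID
      READ EXISTING FROM F.FILE, NEW.ID ELSE EXIT  ;* not on file, so the ID is free
      SLEEP 1                                      ;* collision: wait and build a new ID
   REPEAT
   RETURN
END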

Conclusions

While I've only talked about logging and record ID management, they are key framework components that will be used throughout the rest of the project. Logging is not something that should be left for last; it is part of the core framework.

Stay tuned for the next part of this series in the next issue.

Nathan Rector

Nathan Rector, President of International Spectrum, has been in the MultiValue marketplace as a consultant, author, and presenter since 1992. As a consultant, Nathan specialized in integrating MultiValue applications with other devices, and non-MultiValue data structures and applications, into existing MultiValue databases. During that time, Nathan worked with PDAs, mobile devices, handheld scanners, POS, and other manufacturing and distribution interfaces.

In 2006, Nathan purchased International Spectrum Magazine and Conference and has been working with the MultiValue Community to expand its reach into current technologies and markets. During this time he has been providing mentorship training to people converting Console Applications (Green Screen/Text Driven) to GUI (Graphical User Interfaces), Mobile, and Web. He has also been working with new developers to the MultiValue Marketplace to train them in how MultiValue works and acts, as well as how it differs from the traditional Relational Database Model (SQL).
