The main goal of the project is to provide a web service for the IBM Champions data. But the web service program will not access the data directly. It just translates the HTTP request so that it can access the service layer … service programs. So everything needed to load the data from the database is handled in one place and is available to everything capable of calling an ILE procedure … which includes every RPG program, regardless of whether it is a batch program or an interactive program using a display file to display the data.
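As a minimal sketch of what that means for a caller (champion_get() and champion_t are assumptions for this example and would come from the service layer copybook):

```rpgle
// champion_get() and champion_t are hypothetical names, assumed to be
// provided by the service layer copybook
/copy 'champion_h.rpgle'

dcl-ds champion likeds(champion_t);

// the very same call works in a web service handler, a batch
// program or an interactive display file program
champion = champion_get(42);
```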

Modules

We will create one module per domain which means one module for

  • champion
  • country
  • expertise
  • award

This may look like one module per table, but that is not the case. For example, the awards of a champion will be handled by the champion module. A possible source layout is sketched below.
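Assuming one source member per module, the layout could look something like this (the names are of course just a suggestion):

```
champion.sqlrpgle    module for the champion domain (incl. a champion's awards)
country.sqlrpgle     module for the country domain
expertise.sqlrpgle   module for the expertise domain
award.sqlrpgle       module for the award domain
```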

Granularity

Putting all procedures of one domain into a module makes sense and needs no further explanation. But what about service programs? We will definitely use service programs and not bind by copy. Service programs are a big advantage when it comes to maintaining the project.

But should we put all modules into one service program or create a service program per module?

Both ways have their advantages and disadvantages. As this is a rather small project, we put all modules into one service program. Each module is also quite small and is only used and usable by this project, so making a service program for each is not worth the effort.

The only exception is the country domain. This could also be interesting for other projects and could be put into its own service program.

But for the sake of simplicity we put everything into one service program for this demo project.
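Creating that one service program could look something like this, with binder source controlling the exported procedures (all object, library and procedure names are assumptions for this sketch):

```
/* binder source, f. e. member CHAMPIONS in QSRVSRC          */
STRPGMEXP PGMLVL(*CURRENT) SIGNATURE('CHAMPIONS_V1')
  EXPORT SYMBOL(CHAMPION_GET)
  EXPORT SYMBOL(COUNTRY_GET)
  /* ... one entry per exported procedure ...                */
ENDPGMEXP
```

```
CRTSRVPGM SRVPGM(MYLIB/CHAMPIONS) +
          MODULE(MYLIB/CHAMPION MYLIB/COUNTRY +
                 MYLIB/EXPERTISE MYLIB/AWARD) +
          EXPORT(*SRCFILE) SRCFILE(MYLIB/QSRVSRC) SRCMBR(CHAMPIONS)
```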

Lists of Data

Most of the data we need to handle is flat and fits easily into a data structure. The only exception is the awards of a champion. A champion can have many awards. We could just use an integer array in the champion data structure and store the references to the awards in it. But then we would also have to track the number of awards stored in the array. The array also has a fixed size, which might not be a problem now, but eventually it won’t be large enough.
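A naive version with a fixed-size array would look something like this (all names are just assumptions for illustration):

```rpgle
dcl-ds champion_t qualified template;
  id int(10);
  name varchar(100);
  awards int(10) dim(20);    // award ids, fixed size: full after 20 awards
  numberOfAwards int(10);    // must be maintained separately
end-ds;
```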

The solution to the size problem is to use a list “data structure” like a linked list. A linked list can grow and shrink dynamically as needed. As its memory is dynamically allocated, we also need to give the memory back to the system after use. This is a bit inconvenient, but in my opinion the advantage of not having to deal with the size problem outweighs it by far. So we just write a procedure which takes the champion data structure (in our case) and checks if it has a linked list for the awards. We call this procedure when we don’t need the champion data any more, for example after sending it to a client in a web service.
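A sketch of this, replacing the fixed array with a pointer to a linked list and adding a cleanup procedure (list_dispose() is an assumption standing in for the dispose procedure of whatever linked list implementation is used):

```rpgle
dcl-ds champion_t qualified template;
  id int(10);
  name varchar(100);
  awards pointer inz(*null);   // linked list, grows as needed
end-ds;

// cleanup procedure: called when the champion data is no longer
// needed, f. e. after the data has been sent to the client
dcl-proc champion_finalize export;
  dcl-pi *n;
    champion likeds(champion_t);
  end-pi;

  if champion.awards <> *null;       // only if a list was allocated
    list_dispose(champion.awards);   // give the memory back
    champion.awards = *null;
  endif;
end-proc;
```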

The second problem may not be as apparent as the size problem. Most clients of the service program cannot work with a reference to an award alone. They need the actual data of the award, not just a reference. So we store not only the reference but also the award data in the linked list. By putting the award data into the champion data structure, the client doesn’t have to make an extra call to resolve an award id to the actual award data.
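So each list entry holds a complete award data structure, something like this (list_add() is again an assumption for the add procedure of the linked list implementation):

```rpgle
dcl-ds award_t qualified template;
  id int(10);
  year int(10);
  title varchar(100);
end-ds;

dcl-ds award likeds(award_t);

// while loading a champion, for every award found (dummy values here):
award.id = 1;
award.year = 2024;
award.title = 'IBM Champion';

// store the complete data structure in the list, not just the id
list_add(champion.awards : %addr(award) : %size(award));
```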

This increases the amount of data to be transferred but is easier for the client to work with. We need to evaluate our strategy case by case. There is no “one size fits all”.

It is also possible to provide both “versions” of the champion data, one with the additional award data and one with just the references to the awards. So we could have a normal version and an extended version.
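In the prototypes this could simply show up as two procedures (the names are assumptions):

```rpgle
dcl-pr champion_get likeds(champion_t);           // awards as references only
  id int(10) const;
end-pr;

dcl-pr champion_getExtended likeds(champion_t);   // awards list filled with
  id int(10) const;                               // complete award data
end-pr;
```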

Transactions

We are using transactions when persisting our entities. Some may consider transactions necessary only if multiple tables are changed in the operation. But consider the following use case:

We need to import a batch of entities from another source like an XML file. We need to import either all or none of the data from the XML file, so if the import fails we can simply correct the data and restart the process without having to worry about duplicate data in our target tables. If we don’t use transactions, the work is not rolled back on failure and we end up with duplicate data. So use transactions even if only one table is involved.

But where should the commit and rollback be executed? Many put them in their persist/save/store/delete procedure, but what if you want to persist or delete a batch of entities? That won’t work if the commit/rollback is executed in the persisting procedure. So …

Who controls the transaction?

It should be executed outside the persisting procedure. And it doesn’t matter whether it is executed with `exec sql` or as an opcode. The result is the same.
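A sketch of the batch import from above, with the caller owning the transaction (champion_save() is a hypothetical persisting procedure which only does the insert/update and never commits itself):

```rpgle
dcl-proc importChampions;
  dcl-pi *n;
    champions likeds(champion_t) dim(1000) const;
    count int(10) const;
  end-pi;

  dcl-s i int(10);

  monitor;
    for i = 1 to count;
      champion_save(champions(i));   // insert/update only, no commit
    endfor;

    exec sql commit;                 // all entities or none

  on-error;
    exec sql rollback;               // undo the partial import
    // ... resignal the error to the caller ...
  endmon;
end-proc;
```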

Note: Don’t forget to specify the commit lock level for the module, and mention in the documentation that commitment control is used in this module, so the user knows that commit or rollback has to be called after using the procedure.
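For embedded SQL the commit lock level can be set directly in the module, for example:

```rpgle
// sets the commitment control lock level for all embedded SQL
// statements of this module (here: *CHG, changed rows are locked
// until commit or rollback)
exec sql set option commit = *chg;
```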

Error Handling

If classic RLA (Record Level Access) is used for accessing the data, all I/O errors automatically result in an escape message. But as we are using SQL for data access, this is not the case. Still, signaling the caller that an error occurred by sending an escape message is a good way of handling errors. So we will have to check the SQLSTATE variable after each statement and send an escape message on error. We will use the MESSAGE service program for that.
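A sketch of the pattern, with message_escape() standing in for the corresponding procedure of the MESSAGE service program:

```rpgle
exec sql
  update champion
    set name = :name
    where id = :id;

// SQLSTATE class '00' = success, '01' = warning, '02' = no data;
// everything from '02' upwards is treated as an error here
if sqlstate >= '02000';
  message_escape('Could not update champion ' + %char(id));
endif;
```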

In some cases we don’t just want to inform the caller that an error occurred but that a specific error or condition occurred (like a version mismatch or a duplicate key). In that case we will send a message from the TOC message file so that the caller can identify the error by checking the message id.
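A sketch with duplicate key as the example (SQLSTATE 23505); message_escapeId() and the message id TOC0001 are assumptions standing in for sending an escape message with a specific id from the TOC message file:

```rpgle
exec sql
  insert into champion (id, name) values(:id, :name);

if sqlstate = '23505';
  // duplicate key: send a specific, identifiable message
  message_escapeId('TOC0001' : 'Champion already exists: ' + %char(id));
elseif sqlstate >= '02000';
  // any other error: generic escape message
  message_escape('Could not insert champion ' + %char(id));
endif;
```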