Category Archives: IBM i

Adding variables to ad-hoc SQL queries with CL

Some time ago I mentioned using REXX to build and execute ad-hoc SQL queries. For the sake of completeness, here is the same solution implemented in CL using the RUNSQL command.

The program does exactly the same as the previously described REXX script — extracts all records flagged with today’s date so that they can be reviewed at a later date. And here it is:

pgm
dcl &date6 *char 6
dcl &date8 *char 8
dcl &statement *char 255

/* First retrieve the current system date (QDATE)                             */
/* and convert it to an 8 character value                                     */
RTVSYSVAL  SYSVAL(QDATE) RTNVAR(&DATE6)
CVTDAT     DATE(&DATE6) TOVAR(&DATE8) FROMFMT(*JOB) TOFMT(*YYMD) TOSEP(*NONE)

/* Next, build the SQL statement                                              */
CHGVAR     VAR(&STATEMENT) +
           VALUE('insert into EXPATPAUL1/INPUTDATA' *bcat +
                 'select * from EXPATPAUL1/PRODDATA where dhdate =' *bcat +
                 &date8)
runsql sql(&statement) commit(*none)

endpgm
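
For what it’s worth, assuming a job date of 31 August 2015, the statement handed to RUNSQL ends up as (the *BCAT operator concatenates with a single blank):

insert into EXPATPAUL1/INPUTDATA select * from EXPATPAUL1/PRODDATA where dhdate = 20150831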


Service Programs and Call Stack APIs

I’m a big fan of service programs. From a maintainability point of view, encapsulated procedures are great. And exported procedures, which mean you only need to develop any piece of functionality once, are even better.

However, I now find myself in the position of having to start creating a set of these from scratch (nothing to copy/paste) and, since they can be a bit fiddly, now seems like a good time to document the steps.

In the example that follows, I create a service program with a single procedure, RtvProgram, which returns the name of the program that called the service program. It sounds a bit recursive, I know, but bear with me. Whenever coding a display file (or a report, for that matter) I like to put the program name somewhere on the screen (top left, unless someone tells me otherwise). This means that when someone has a problem, we can very quickly identify what program they are talking about.

Obviously, hard-coding the program name in the display file would be easy. But code gets copied and pasted and, sooner or later, all hard coded values end up being wrong. I have also seen cases of one display file being used by two or more programs, so it is much better to put the program name in a field and retrieve it dynamically. That way, you are always getting the right program.

So on to the service program.

First, you need a copy member to contain the procedure prototype. You could, of course, code the prototype in each program that uses the service program, but that way madness lies.

In my case, I have created the copy member in QRPGLESRC and called it LSS001RP. It looks like this:

 * Retrieve Program Name
d RtvProgram      pr            10a   

And then you need to write the service program:

 * ---------------------------------------------------------------------- *
 * Program     : LSS001R                                                  *
 * Description : Program information service programs                     *
 * ---------------------------------------------------------------------- *
h nomain

 * ---------------------------------------------------------------------- *
 * Exportable Prototypes                                                  *
 * ---------------------------------------------------------------------- *
 /copy LSCLIB/qrpglesrc,lss001rp

 * ---------------------------------------------------------------------- *
 * API Prototypes                                                         *
 * ---------------------------------------------------------------------- *
d RtvCallStack    pr                  extpgm('QWVRCSTK')
d                             2000a
d                               10i 0
d                                8a
d                               56a
d                                8a
d                               15a

 * ---------------------------------------------------------------------- *
 * RtvProgram: Retrieve the program name                                  *
 * ---------------------------------------------------------------------- *
p RtvProgram      b                   export
d RtvProgram      pi            10a

d Var             ds          2000    qualified
d  BytAvl                       10i 0
d  BytRtn                       10i 0
d  Entries                      10i 0
d  Offset                       10i 0
d  Count                        10i 0

d JobID           ds                  qualified
d  QName                        26a   inz('*')
d  IntID                        16a
d  Res3                          2a   inz(*loval)
d  ThreadInd                    10i 0 inz(1)
d  Thread                        8a   inz(*loval)

d Entry           ds                  qualified
d  Length                       10i 0
d  Program                      10a   overlay(Entry: 25)
d  Library                      10a   overlay(Entry: 35)

d VarLength       s             10i 0 inz(%size(Var))
d RcvFormat       s              8a   inz('CSTK0100')
d JobIdFmt        s              8a   inz('JIDF0100')
d ApiError        s             15a
d i               s             10i 0
 /free

     RtvCallStack(Var: VarLength: RcvFormat: JobID: JobIdFmt: ApiError);
     for i = 1 to 2;
         Entry = %subst(Var: Var.Offset + 1);
         Var.Offset += Entry.Length;
     endfor;

     return Entry.Program;

 /end-free
p RtvProgram      e
 * ---------------------------------------------------------------------- * 

I’m not going to go into too much detail here. The service program LSS001R contains one procedure, RtvProgram, which uses the QWVRCSTK API to retrieve the current call stack and then reads back two entries: the first entry is the service program itself and the second is the calling program. It is the name of this second program that gets returned.

You now need to create the RPG Module. Note the terminology here — you are not creating a Bound RPG Program (it’s the difference between options 15 and 14 in PDM).
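
Assuming the source member sits in LSCLIB/QRPGLESRC, like the copy member, creating the module looks like this:

CRTRPGMOD MODULE(LSCLIB/LSS001R) SRCFILE(LSCLIB/QRPGLESRC) SRCMBR(LSS001R)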

I also need the binding source. In this case, the member is called LSS001S and I have put it in the QSRVSRC source file. It looks like this:

STRPGMEXP  PGMLVL(*CURRENT)
    EXPORT SYMBOL('RTVPROGRAM')
ENDPGMEXP

Note that the capitalisation is actually important here: RPG exports procedure names in upper case by default, so the symbol name in the binding source must match.

And now I’m ready to create the service program:

CRTSRVPGM SRVPGM(LSCLIB/LSS001R) MODULE(LSCLIB/LSS001R) SRCFILE(LSCLIB/QSRVSRC) SRCMBR(LSS001S)

Since I’m doing this from scratch, I need to create a binder directory:

CRTBNDDIR BNDDIR(LSCLIB/LSBNDDIR) TEXT('General purpose binding directory')

And add the service program to it:

ADDBNDDIRE BNDDIR(LSCLIB/LSBNDDIR) OBJ((LSS001R))

And we’re ready to go. All I have to do now is make a couple of amendments to the main program to take advantage of the service program:

The control spec needs this line:

h bnddir('LSBNDDIR')                                                       

Obviously, I need to copy the prototype definition somewhere in the definition specification:

 /copy LSCLIB/qrpglesrc,lss001rp                                       

And when the program starts, I need to identify the name of the program:

 /free

     // Identify the current program
     program = RtvProgram();
                                                                           

And that’s it.
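
Well, almost. The main program still needs to be compiled with CRTBNDRPG so that the binder can resolve RtvProgram; the binding directory named on the control spec is picked up automatically. Something like this, with a hypothetical program name:

CRTBNDRPG PGM(LSCLIB/MYDSPPGM) SRCFILE(LSCLIB/QRPGLESRC) SRCMBR(MYDSPPGM) DFTACTGRP(*NO)

Note the DFTACTGRP(*NO), which is required when binding to service programs.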


Using EXTFILE to override files within RPG

So here’s the situation: five warehouses populating five sets of files (same filenames, different libraries), and an RPGLE program that needs to read through each of these to accumulate dispatch information.

The traditional way of doing this would involve writing a CL program to OVRDBF to each file before calling the RPGLE program. This works, but it’s a bit messy and can cause future maintenance problems.
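
For reference, the wrapper would look something along these lines (WHSLIB1 and DISPATCHR are hypothetical names, and you would need one override per warehouse library):

pgm
/* Point MOVEMENTS at one warehouse library, call the program, clean up */
ovrdbf file(MOVEMENTS) tofile(WHSLIB1/MOVEMENTS)
call DISPATCHR
dltovr file(MOVEMENTS)
endpgm

It’s much better, therefore, to specify the file (and library) to open within the RPGLE program itself. You can do this with the EXTFILE keyword.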

Here’s an example:

 * ---------------------------------------------------------------------- *
 * Program     : LSX001R                                                  *
 * Description : Dynamically select which file to open                    *
 * ---------------------------------------------------------------------- *
h main(Main)
fMOVEMENTS if   e           k DISK    usropn extfile(filename)

d Main            pr                  extpgm('LSX001R')
d  library                      10a

dfilename         s             21a

 * ------------------------------------------------------------------------
 * Main Procedure
 * ------------------------------------------------------------------------
p Main            b
d Main            pi
d  library                      10a
 /free

     filename = %trim(library) + '/' + 'MOVEMENTS';
     open MOVEMENTS;

     // Do whatever processing on the file needs to be done...

     close MOVEMENTS;

 /end-free
p Main            e

Hopefully this is reasonably straightforward. In the file spec for the MOVEMENTS file, I have used EXTFILE with a variable filename to specify the actual file opened. I have also used the USROPN keyword as I need to populate the filename variable before I attempt to open the file.

Populating this field is pretty simple. I pass the library name to the program as a parameter and the first thing the Main procedure does is concatenate the library and file with a ‘/’ separator.

Now I can open the correct file, do whatever processing is necessary, and then close the file afterwards.

A similar approach, using the EXTMBR keyword, can also be used to open a specific file member. You can, of course, combine EXTFILE and EXTMBR in order to dynamically determine both the file and member, if you really want to.
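
For example, an F-spec along these lines (with mbrname being another character variable, populated in the same way as filename) would open a specific member:

fMOVEMENTS if   e           k DISK    usropn extfile(filename)
f                                     extmbr(mbrname)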

It should be noted that this will only work if you are using traditional database IO. Any embedded SQL will remain unaffected by anything you do in the F-Spec. If you want to do something similar with SQL, you will need to look into the CREATE ALIAS statement.
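
To give a flavour of the SQL equivalent, here is a minimal sketch using CREATE ALIAS, with WHSLIB1 again being a hypothetical warehouse library:

create alias QTEMP/MOVEALIAS for WHSLIB1/MOVEMENTS;
-- embedded SQL can now work with the alias as if it were a table
select * from QTEMP/MOVEALIAS;
drop alias QTEMP/MOVEALIAS;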


Merging data with SQL MERGE

There have been times, many times, when I have needed to update one or more columns in a table with values retrieved from another. My usual approach is to use an UPDATE with subselects, but it turns out that there is a better way:

The MERGE statement updates a target (a table or view) using data from a source (result of a table reference). Rows in the target that match the input data may be updated or deleted as specified, and rows that do not exist in the target may be inserted as specified. Updating, deleting, or inserting a row in a view updates, deletes, or inserts the row into the tables on which the view is based if no INSTEAD OF trigger is defined on the view.

In order to play around with this, I built and populated a couple of test tables (imaginatively named TEST01 and TEST02) using the scripts below:

create table test01
 (key_field char(1) not null,
  data01 char(20),
  data02 char(30),
  data03 char(5),
  primary key (key_field));

insert into test01
 (Key_field, data01)
values('A', 'TEXT One');

insert into test01
 (Key_field, data01)
values('B', 'TEXT Two');

insert into test01
 (Key_field, data01)
values('D', 'TEXT Four');

create table test02
 (key_field char(1) not null,
  data01 char(20),
  data02 char(30),
  data03 char(5),
  primary key (key_field));

insert into test02
 (Key_field, data02)
values('A', 'More text one');

insert into test02
 (Key_field, data02, data03)
values('C', 'more text two', 'CODE');

insert into test02
 (Key_field, data01, data02)
values('E', 'TEXT five', 'More text five');

This gives me two tables with one matched row (key_field=’A’). To update the first table with the values in the second is really rather simple:

MERGE INTO test01 a
  USING (SELECT key_field, data01, data02, data03 from test02) b
  ON b.key_field = a.key_field
WHEN MATCHED THEN
   UPDATE SET data02 = b.data02, data03 = b.data03;

And the one matched row is updated in table TEST01 with the data from table TEST02.

If this was all I could do, I would be happy to have replaced a rather clunky approach with a more readable solution. But now the MERGE statement really starts to make a difference:

MERGE INTO test01 a
  USING (SELECT key_field, data01, data02, data03 from test02) b
  ON b.key_field = a.key_field
WHEN MATCHED THEN
   UPDATE SET data02 = b.data02, data03 = b.data03
WHEN NOT MATCHED THEN
   INSERT (key_field, data01, data02, data03)
   VALUES (b.key_field, b.data01, b.data02, b.data03)
NOT ATOMIC
  CONTINUE ON SQLEXCEPTION;

The WHEN NOT MATCHED clause means that not only can I update the matched rows, I can also do something about the rows that exist in table TEST02 but have no matching record in table TEST01. In this example, I have inserted the rows from table TEST02; the other options are to raise an error with a SIGNAL statement or, on the WHEN MATCHED side, to delete the matched row from table TEST01 instead of updating it.

And that NOT ATOMIC CONTINUE ON SQLEXCEPTION specifies that if an error is encountered on the insert, the MERGE will continue. Bad data happens; this saves me the burden of constantly having to refine my selection criteria to avoid the crap.

And a final note on the subject of selection criteria: both WHEN MATCHED and WHEN NOT MATCHED can be extended with an AND to refine the selection to exactly what you want.
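
Building on the tables above, something like this would delete matched rows carrying a particular code and update the rest:

MERGE INTO test01 a
  USING (SELECT key_field, data01, data02, data03 from test02) b
  ON b.key_field = a.key_field
WHEN MATCHED AND b.data03 = 'CODE' THEN
   DELETE
WHEN MATCHED THEN
   UPDATE SET data02 = b.data02, data03 = b.data03;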

Do go and read up on the MERGE statement. It provides a powerful and flexible approach to updating tables from source data and one that I will be using in the very near future.


OSPAT: The Open Source POWER Availability Tool

Here’s a handy one. If you are using Linux on the IBM Power architecture, the Open Source POWER Availability Tool (OSPAT) allows you to search multiple distributions for available packages.

[T]he specialty of OSPAT is that it only returns POWER package results, it does not return x86 packages in the search results. Thus, the user doesn’t have to drill deep into the search results to find the POWER packages. Also, the search results display the results across multiple distributions, helping POWER users determine whether certain packages have equivalency across multiple distributions.

Click here if you want to take a look.


Happy Birthday, RPG IV

RPGPGM.com notes that 21 years ago today, RPG IV (also known as RPG ILE) was released.

The first version of RPG IV came as part of OS400 V3R1, which was released on November 25 1994. Even though the code was still constrained by columns, the new Definition specifications (D-specs) was introduced, and I could now use variable names that were up to ten characters long. It also made it possible to use Date, Time, and Timestamp data types along with operations codes to be able to easily perform math with them. And I no longer had to use indicators for reads, chains, etc.

This makes me feel a bit old as I remember when RPG ILE was the next big thing. And, without much fanfare, it has continued to evolve — in many ways more rapidly than other, more popular languages. This is both RPG’s greatest strength and its greatest weakness.

IBM have, over the years, put a great deal of effort into ensuring that, for all the new features, backward compatibility has never been broken. This means that if you are running an i on Power, you can upgrade as and when you are ready without having to worry about all of your applications breaking. There are not many reasons for not staying on a supported release, and fewer valid ones.

The downside of this is that there is nothing to force developers to take advantage of any of these new features. You could, if you wanted to, write programs in exactly the way you did 30 years ago. They would still compile, still run and still work as expected. And people do.

RPG today is in the somewhat unfortunate position of being a very modern language that is often perceived as being old fashioned. This is because it lets programmers get into a rut and never learn anything new.

I doubt that there is any way of squaring this particular circle. Backward compatibility is important and business critical applications shouldn’t be put at risk. But remember this: If someone claims that RPG is old, obsolete, or worse, the issue to which they are referring is not a technical one.


Better date conversion with timestamp_format

Some time ago, I discussed the never-ending problem of converting a six digit number into an ISO date. I have since found a better way of doing this and, in order to save myself a search through the IBM Knowledge Center every time I need to do this, I’m putting it up here.

So here’s the problem: I have a table in which the date is represented as a 7 digit number. The first digit is the century (0 indicates 20th century, 1 indicates 21st century) and the next six digits represent the date in YYMMDD format. So 1150831, for example, is 31st August 2015.

The request in hand is for a report, so I want to convert the number into a date so that it looks sensible once exported into Excel. Timestamp_format is your friend.

The TIMESTAMP_FORMAT function returns a timestamp that is based on the interpretation of the input string using the specified format.

It’s that “interpretation of the input string” that makes this so handy. Put simply: you tell the function how to interpret the string and it will do the rest.

Here’s an example. I have a contracts table (CONTRACTS) with three columns: Contract Number (CTNR), Start Date (STDT) and End Date (ENDT). The start and end date are both seven digit numbers as described previously. I need to list all of the currently active contracts (Start Date is before Today and End Date is after Today).

And the SQL looks like this:

select CTNR,
       date(timestamp_format(substr(digits(STDT), 2, 6), 'YYMMDD')),
       date(timestamp_format(substr(digits(ENDT), 2, 6), 'YYMMDD'))
from CONTRACTS
where date(timestamp_format(substr(digits(STDT), 2, 6), 'YYMMDD')) <= current_date
and date(timestamp_format(substr(digits(ENDT), 2, 6), 'YYMMDD')) >= current_date

Current_date is a special register that returns the current date, but that’s not important right now.

The date conversion part involves several nested functions to get to a final date, so here’s the breakdown using a date of 1st September 2015 (1150901):

  • digits(STDT) converts the 7 digit number into a 7 character string. So 1150901 is converted to ‘1150901’.
  • The IBM i can figure out which century it’s in by looking at the year, so I don’t need the century flag, and substr(digits(STDT), 2, 6) strips it out to give me a value of ‘150901’.
  • The timestamp_format function takes the date string and uses the format string of ‘YYMMDD’ to generate a timestamp of 2015-09-01-00.00.00.000000. Since I only passed it a date, the hours, minutes and seconds are all zeroes.
  • And finally, I can use the date function to retrieve the date from the generated timestamp.

And here’s the result:

CTNR      DATE      DATE
C1000001  01/09/15  31/08/16

Pretty, and portable.
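
If you want to see each step in isolation, you can run something like this from a SQL session. The dec() cast mimics a DECIMAL(7, 0) column definition, which is an assumption on my part about how STDT and ENDT are declared:

select digits(dec(1150901, 7, 0)) from sysibm/sysdummy1;                  -- '1150901'
select substr(digits(dec(1150901, 7, 0)), 2, 6) from sysibm/sysdummy1;    -- '150901'
select timestamp_format('150901', 'YYMMDD') from sysibm/sysdummy1;        -- 2015-09-01-00.00.00.000000
select date(timestamp_format('150901', 'YYMMDD')) from sysibm/sysdummy1;  -- 2015-09-01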


Using QShell and CL to work with Stream files in the IFS

It turns out that there is no simple way of generating a list of files currently residing in a folder on the IBM i IFS. Simple, in this case, would be a command like DSPLNK OUTPUT(*FILE). An API does exist, but a combination of not enough time and too much laziness proved to be quite a disincentive to going down that route.

The issue was that we receive a number of very big files and, to save a bit of bandwidth, these files were being zipped before being sent to the IBM i. Dropping the file into the IFS and unzipping it was easy enough, but then I found myself with an archive folder containing one or more files. While I can set the name of the folder into which the files should be extracted, I have no way of determining beforehand what the file names will be.

Here’s a solution:

/* -------------------------------------------------------------------------- */
/* Program     : EXTRACTZIP                                                   */
/* Description : Retrieve and unpack a zipped archive                         */
/* Written by  : Paul Pritchard                                               */
/* Date        : 27/05/2015                                                   */
/* -------------------------------------------------------------------------- */
pgm
dcl &library *char 10 value('MYLIB')
dcl &fromfile *char 50
dcl &tombr *char 50 value('/qsys.lib/qtemp.lib/IMPORTP.file/IMPORTP.mbr')
dcl &dltfile *char 50
dclf EXTRACTP

/* Retrieve the zipped file and unzip it.                                     */
/* I won't bore you with the details here, but the production program is      */
/* retrieving the ZIP file from an FTP server and using PKZIP to unzip it.    */
/* The extract directory is /home/EXPATPAUL/EXTRACTFLR which now contains one */
/* or more stream files                                                       */

/* Retrieve a list of extracted files                                         */
/* First, I use QShell to list the files in EXTRACTFLR. The output of this is */
/* redirected to ExtractedFiles.TXT.                                          */
/* In order to use this information, I copy the ExtractedFiles.TXT stream     */
/* to an ad-hoc physical file (QTEMP/EXTRACTP)                                */
qsh cmd('ls /home/EXPATPAUL/EXTRACTFLR/ > /home/EXPATPAUL/ExtractedFiles.TXT')
crtpf file(QTEMP/EXTRACTP) rcdlen(20)
cpyfrmstmf fromstmf('/home/EXPATPAUL/ExtractedFiles.TXT') +
           tombr('/qsys.lib/qtemp.lib/EXTRACTP.file/EXTRACTP.mbr') +
           mbropt(*REPLACE)

/* And now I can use QTEMP/EXTRACTP to drive my way through EXTRACTFLR and    */
/* copy each of the files in the archive into the IMPORTP physical file.      */
dowhile '1'
    rcvf
    monmsg msgid(CPF0864) exec(LEAVE)

    /* Copy the next stream file from the archive                             */
    chgvar &fromfile value('/home/EXPATPAUL/EXTRACTFLR/' *tcat &EXTRACTP)
    cpyfrmstmf fromstmf(&fromfile) tombr(&tombr) mbropt(*add) +
               STMFCCSID(*PCASCII)

    /* and then delete the stream file                                        */
    chgvar &dltfile value('rm /home/EXPATPAUL/EXTRACTFLR/' *tcat &EXTRACTP)
    qsh cmd(&dltfile)

    enddo

/* Clean up and exit                                                          */    
qsh cmd('rm /home/EXPATPAUL/ExtractedFiles.TXT')
dltf qtemp/EXTRACTP

endpgm
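
For illustration, after the qsh ls step, QTEMP/EXTRACTP ends up holding one record per extracted file, along these lines (the file names are, of course, hypothetical):

ORDERS01.CSV
ORDERS02.CSV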

It should go without saying that some of the names have been changed and that the above program should be treated as a sample only.

Being able to move information between the QShell/IFS and traditional i5/OS environments is both useful and (in my experience) increasingly important. Although it does take a bit of thinking about, it isn’t difficult, which is why I find the oft-seen solution of “buy this tool” both disappointing and (often) overkill.


It is done

I mentioned last month that I was in the process of consolidating the mess of repositories that I have created over the past few years. This is now done and, hopefully, will be a bit more manageable going forward.

I now have three consolidated repositories: silliness, utilities and utils-on-power. The Configurate repository is already organised logically enough, and Squirt is large enough to justify its own repo.

I have also removed the static pages referring to these repos on this site. They were creating overhead without adding any value.

And I now know a lot more about Git than I knew this time last month.
