Category Archives: IBM i

It is done

I mentioned last month that I was in the process of consolidating the mess of repositories that I have created over the past few years. This is now done and, hopefully, will be a bit more manageable going forward.

I now have three consolidated repositories: silliness, utilities and utils-on-power. The Configurate repository is already organised logically enough, and Squirt is large enough to justify its own repo.

I have also removed the static pages referring to these repos on this site. They were creating overhead without adding any value.

And I now know a lot more about Git than I knew this time last month.


Adding variables to ad-hoc SQL queries with REXX

It’s incredible how easily I can be distracted. All I needed was a quick and dirty way of capturing the input into an interface and now I’m writing a blog post.

Everything starts with an issue and, in this case, the issue is with an interface not behaving as expected in production even though everything works as expected in the test environment. My suspicion is that the incoming data is not correctly formatted, causing transactions to not meet the selection criteria. But to confirm this, I need to see exactly what is waiting to be processed when the interface runs.

Since this interface runs after the end of business, I want to be able to submit an SQL query to capture the input data into a separate table so that I can see what was processed when I come in tomorrow morning. And, because much of this data is not destined to pass through the interface in question (this is why we have selection criteria), I want to be able to select today’s transactions for whatever the current value of today happens to be.

In a sane world, this would be a simple case of using the current date but, in the real world, there are still people using eight digit numbers to represent the date. This leads to some unwieldy date calculations in SQL, which led me to wonder whether I could capture this in a variable to make things a bit more readable. It turns out I can, but not on the version of the IBM i operating system that I am currently using.
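To make the shape of the problem concrete, here is a quick Python sketch (purely illustrative; it borrows the names from the REXX script below) of what "today as an eight digit number" looks like:

```python
from datetime import date

# Render today's date as an eight digit number (yyyymmdd),
# the format used by the legacy date column.
datenow = int(date.today().strftime("%Y%m%d"))

# The value can then be interpolated into an ad-hoc query string.
statement = f"select * from PRODLIB.INPUTDATA where dhdate = {datenow}"
```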

What really caught my eye at the link above, however, was this:

but if “ad hoc sql + variables” is the thing you need, you should really try Rexx.

And I thought: “Rexx. That’s something I haven’t used since last century.”

Any excuse to download a couple of manuals is reasonable as far as I’m concerned, so off I went.

And here’s the script (library and table names have been changed):

datenow = DATE('S')
statement = 'insert into MYLIB/INPUTDATA' ,
            'select * from PRODLIB/INPUTDATA where dhdate =' datenow
address 'EXECSQL'
EXECSQL statement

The DATE function returns the current date, and the 'S' option handily reformats it into an eight digit number (yyyymmdd). The address 'EXECSQL' statement points the subsequent commands at the SQL command environment, and then it’s just a case of executing the statements I need to execute.

It all works very nicely indeed.

On a tangential note, I noticed that Rexx is also available in Arch. I may well install this sometime soon just to have a play around with the language.


IBM i Access Client Solutions… on Arch

This is a follow up to my earlier post about connecting to a cloudy AS/400 (yes, an actual AS/400 running V5R3M0).

As I mentioned at the time, I had run into problems installing iSeries Access because IBM had removed the RPMs from their site. I asked about this, and the package maintainer very helpfully provided me with a collection of links to the various versions. However, he also mentioned that the source files were removed from the IBM website because IBM wants everyone to use IBM i Access Client Solutions instead. This is in the AUR as iacs, so I thought I’d try it first.

It works… beautifully.

And I do love a snappy application name.


AS/400… In the cloud

I never thought I would be able to use that as a post title but, as reported by The Register, German hosting company Rechenzentrum Kreuznach has popped an AS/400 into the cloud, and anyone can use it for free. I’m anyone so I signed up.

Of course, there isn’t much point in having an account if I don’t have a terminal. Fortunately, Arch has everything.

I first tried tn5250 which proved to be a nice little package that can be started from the terminal. It certainly works and achieves exactly what it attempts. The only problems I encountered were that some of the key mappings were a bit odd (probably as a result of me using the wrong character map) and (more seriously) that running one terminal inside another can cause a little command key confusion.

It was at this point that I noticed that the AUR actually includes iSeries Access. Unfortunately, this is proving to be a bit of a struggle – the package maintainer appears to have assumed that I’ve already downloaded the RPM, which I have been unable to find. I’ve left a comment on the package asking about this and will come back to it if I am able to find the RPM somewhere… anywhere.

(Tangentially: How does IBM manage to continually build such awful websites? Every time I have to negotiate Big Blue’s labyrinthine online presence, I find myself faced with sites that are slow, clunky, painful to navigate and – all too often – completely inconsistent.)

So I turned to the TN5250 Java Edition. Installing and configuring this turned out to be a completely painless process, and I’m in.

To tell the truth, I’m not sure what – if anything – I will do with this. But it’s always fun to poke around an older bit of kit, if only to remind myself how far things have progressed over the past decade.


The title of this post was changed at the request of Source Data who own the trademark for Cloud/400 and have asked me to avoid causing any confusion.


Keeping Qshell sessions alive

Not a lot of people realise this, but the IBM i has a POSIX-compliant shell environment, known as Qshell. It’s relatively basic (compared to both Bash and the native i command line) but it can be quite handy when I need, for example, to grep a source file.

One thing that has always annoyed me about Qshell, however, is that it doesn’t retain any history between sessions. Given that my workflow involves starting at the i command line, performing a task in Qshell, and then returning to the command line, the lack of history led either to unnecessary retyping or to copying and pasting commands into and out of a text editor.

Today I noticed that the F12 key can be used to disconnect a Qshell session without actually ending it. And when I next enter the QSH command, I find myself back in the same session with my history intact.

This isn’t going to help with finding commands I typed yesterday, but it will allow me to avoid unnecessary retyping within the same day.


Why use grep to search a source file rather than the more usual FNDSTRPDM command?

Incompetent contractors is the short answer. Incompetent contractors who introduced an unknown number of divide by zero errors is the slightly longer answer.

In RPG, the division operator is / and the comment symbol is //. I could use FNDSTRPDM to search for all the slashes and then manually scroll past all the comment lines. Or I could shortcut this process with the following piped grep:

grep -in '/' /qsys.lib/sourcelib.lib/qrpglesrc.file/program.mbr | grep -iv '//'

I’m lazy. I grep.
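For the curious, the filter is easy to sketch in other languages too. A Python version (the sample source lines are invented) makes the one limitation obvious: like the piped grep, it also discards lines where a division is followed by a trailing comment.

```python
# Lines from a hypothetical RPG source member.
source = [
    "eval result = total / count",
    "// divide totals by line count",
    "eval ratio = a / b  // may divide by zero",
]

# Keep numbered lines that contain a slash but no comment marker,
# mirroring: grep -in '/' member | grep -iv '//'
hits = [f"{n}:{line}" for n, line in enumerate(source, 1)
        if "/" in line and "//" not in line]

print(hits)  # only line 1 survives; line 3 is lost to its trailing comment
```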


Using SQL to update one file from another

With a recent interface change, I was asked to go back and fix all of the affected historical data so that it matched the new requirements. Updating a field in one file with a value from a different file is something I have done several times in the past (far too many ad-hoc queries have been launched, if truth be told) and, while you do need to take a bit of care, the approach is pretty simple.

So here is an example (some names have been changed to protect the proprietary):

update target_file upd
set target_field = (select source_field from source_file
                    where source_key_1 = substr(upd.target_key, 1, 16)
                    and digits(source_key_2) = substr(upd.target_key, 17, 5) )
where exists (select source_field from source_file
              where source_key_1 = substr(upd.target_key, 1, 16)
              and digits(source_key_2) = substr(upd.target_key, 17, 5) )

The script is pretty simple. For each record in target_file for which an associated record can be found in source_file, populate target_field with the value in source_field. Obviously, the select clauses will need to reflect the relevant keys of whatever files you happen to be using.

Inevitably, there is a gotcha: for each record in target_file that you want to update, there must be exactly one record returned by the subquery. Handling this can be split into two parts.

The first part is handled by the where exists clause which ensures that the script will only attempt to update records in target_file if there is a record in source_file with which to update it. This ensures you don’t get caught out by subqueries that return zero records.
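The effect is easy to demonstrate outside DB2 for i. Here is a sketch using Python’s built-in sqlite3 module (table and key names simplified, data invented): without the where exists clause, the subquery would return null for the unmatched row and wipe out its existing value; with it, the row is left alone.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table target_file (target_key text, target_field text);
    create table source_file (source_key text, source_field text);
    insert into target_file values ('A', 'old'), ('B', 'old');
    insert into source_file values ('A', 'new');
""")

# Row 'B' has no match in source_file; the WHERE EXISTS clause
# stops the update from overwriting it with NULL.
con.execute("""
    update target_file as upd
    set target_field = (select source_field from source_file
                        where source_key = upd.target_key)
    where exists (select source_field from source_file
                  where source_key = upd.target_key)
""")

print(con.execute("select * from target_file order by target_key").fetchall())
# ('A', 'new') was updated; ('B', 'old') was left untouched
```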

The second part involves ensuring that the subquery returns no more than one record for each record in target_file. This, unfortunately, cannot be solved generically – you just need to be a bit careful to ensure that the subquery selection returns unique records. If in doubt, a variation on the SQL below can be used to validate.

select source_key_1, digits(source_key_2), count(*)
from source_file
group by source_key_1, digits(source_key_2)
having count(*) > 1

If you can’t find unique selection criteria, the distinct clause may help and, if all else fails, try arbitrarily using either max() or min().


Executing SQL statements from within a CL with RUNSQL

Here’s a CL command I didn’t know about: RUNSQL, which allows an SQL statement to be run from within a CL program without needing a source file.

Inevitably, I found this because I was looking for a quick solution to a transitory problem. I won’t bore you with the details, but what I wanted to do was create a temporary file, populate it with orders for a (user-specified) date, and then copy this file into a library that has been made available to an external FTP application.

Here is what I ended up with (field, file and library names have been changed for clarity and to protect the innocent):

  pgm parm(&date)
  dcl &date *char 8
  dcl &statement *char 150
  dcl &nbrrcd *dec (10 0)
  cpyf fromfile(template/orders) tofile(qtemp/orders) +
       mbropt(*add) crtfile(*yes)
  clrpfm qtemp/orders
  chgvar &statement value('insert into qtemp/orders +
                           select * from archlib/orders +
                           where msgdat = ' *cat &date)
  runsql sql(&statement) commit(*none)
  rtvmbrd qtemp/orders nbrcurrcd(&nbrrcd)
  if cond(&nbrrcd *gt 0) then(do)
     cpyf fromfile(qtemp/orders) tofile(ftplib/orders) +
          mbropt(*add) crtfile(*yes)
  enddo
  dltf qtemp/orders
  endpgm

It’s all pretty simple stuff but being able to embed the SQL statement right in the CL makes a conceptually simple solution very simple to implement.


Using the SQL with clause to group by calculated values

So here’s a question that cropped up recently: In an SQL Select statement, can you group by a calculated field (scalar function)?

The short answer is: No.

The longer answer is: Of course you can.

It did occur to me, last week, to simply post the script that I used to achieve this, but there are a number of other things going on in there and I suspect that the unannotated code would be more confusing than clarifying. It is a useful technique, however, and one that I can see myself using again, so here is a simplified version of the problem along with the solution (file[1] and field names have been changed for the sake of clarity).

So here’s the question: Can you provide a report showing the latest invoice number and invoice date for each customer (some selection criteria may apply)?

Since we have a sales file for which the sh_invoice_number field is updated when the transaction is invoiced, the first pass of this report is pretty simple:

select sh_sales_customer, max(sh_invoice_number), max(sh_invoice_date)
from sales_history
group by sh_sales_customer

Now for the wrinkle.

The sh_sales_customer field is the delivery point for the sales transaction. In many cases, this is the same as the invoice account, but there are cases where one invoice account encompasses several delivery points. Finance folk don’t care where goods are shipped, they just want to know where to send the invoice.

We have a file, account_numbers which maps the delivery point to the invoice account, if necessary. So the script needs to be changed so that it checks this file and uses either the retrieved value or the sales_customer, depending on the results of the lookup. The With clause is your friend.

You can’t Group By the calculated field, but you can use the With clause to factor the sh_sales_customer calculation into a subquery. And this is what we end up with:

with customer_accounts as
   (select dp_delivery_point as ca_delivery_point,
           case
              when an_invoice_account is not null then an_invoice_account
              else dp_delivery_point
           end as ca_customer
    from delivery_points
    left outer join account_numbers on an_delivery_point = dp_delivery_point)
select ca_customer, max(sh_invoice_number), max(sh_invoice_date)
from sales_history
join customer_accounts on ca_delivery_point = sh_sales_customer
group by ca_customer

The delivery_points file contains all delivery points along with shipping information.


[1] In case any of you youngsters are feeling confused by the terminology: a file (in this context) is a table, a record is a row and a field is a column. I am aware that I have the rather bad habit of using these terms interchangeably, but I’m too lazy to go back and reword this post.


The field and file names should be reasonably self explanatory, but if you are struggling with what belongs where, here are some definitions:

FILE                  FIELD                 TYPE       LENGTH   SCALE

SALES_HISTORY         SH_SALES_CUSTOMER     CHAR            8       -
SALES_HISTORY         SH_INVOICE_NUMBER     INTEGER         4       0
SALES_HISTORY         SH_INVOICE_DATE       DATE            4       -

DELIVERY_POINTS       DP_DELIVERY_POINT     CHAR            8       -

ACCOUNT_NUMBERS       AN_INVOICE_ACCOUNT    CHAR            8       -
ACCOUNT_NUMBERS       AN_DELIVERY_POINT     CHAR            8       -

The real files contain much more information, of course, but I have stripped that out in order to keep things reasonably simple.
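For anyone who wants to experiment with the pattern, here is a cut-down, runnable version using Python’s built-in sqlite3 module (the data is invented; the table layouts follow the definitions above). The customer calculation is factored into the CTE, and the outer query then groups by the result.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table sales_history (sh_sales_customer text,
                                sh_invoice_number integer,
                                sh_invoice_date text);
    create table delivery_points (dp_delivery_point text);
    create table account_numbers (an_invoice_account text,
                                  an_delivery_point text);

    -- Two delivery points; DP2 invoices through account 'HQ'.
    insert into delivery_points values ('DP1'), ('DP2');
    insert into account_numbers values ('HQ', 'DP2');
    insert into sales_history values
        ('DP1', 101, '2015-01-10'),
        ('DP2', 102, '2015-02-11'),
        ('DP2', 103, '2015-03-12');
""")

# The CTE resolves each delivery point to its invoice account,
# falling back to the delivery point itself when there is no mapping.
rows = con.execute("""
    with customer_accounts as
       (select dp_delivery_point as ca_delivery_point,
               case when an_invoice_account is not null
                    then an_invoice_account
                    else dp_delivery_point
               end as ca_customer
        from delivery_points
        left outer join account_numbers
          on an_delivery_point = dp_delivery_point)
    select ca_customer, max(sh_invoice_number), max(sh_invoice_date)
    from sales_history
    join customer_accounts on ca_delivery_point = sh_sales_customer
    group by ca_customer
    order by ca_customer
""").fetchall()

print(rows)  # [('DP1', 101, '2015-01-10'), ('HQ', 103, '2015-03-12')]
```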


Using an SQL subselect to insert records into a table: An example

The following bit of code should be reasonably self-explanatory but I was a little surprised to discover that there are no examples of doing this (for IBM i folks, at least) on the interwebnet.

So now there’s one.

insert into ignoreiln
  with slsrpt as (select * from ignoreiln where mtyp = 'SLSRPT'),
       invrpt as (select * from ignoreiln where mtyp = 'INVRPT')
  select a.mdir, 'INVRPT', a.siln, a.diln, a.riln,
         a.dept, a.brnd, 'Auto Created'
  from slsrpt a
  left outer join invrpt b on b.mdir = a.mdir
       and b.siln = a.siln and b.diln = a.diln
       and b.riln = a.riln and b.dept = a.dept
       and b.brnd = a.brnd
  where b.siln is null
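The pattern translates readily to other databases. Here is a trimmed-down sketch using Python’s built-in sqlite3 module (fewer columns, invented data; note that SQLite wants the with clause before the insert, where DB2 for i accepts it after the table name):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table ignoreiln (mdir text, mtyp text, siln text, note text);
    insert into ignoreiln values
        ('*IN', 'SLSRPT', '1111111111111', 'manual'),
        ('*IN', 'SLSRPT', '2222222222222', 'manual'),
        ('*IN', 'INVRPT', '1111111111111', 'manual');
""")

# For every SLSRPT entry without a matching INVRPT entry,
# create the INVRPT entry automatically (a left-join anti-join).
con.execute("""
    with slsrpt as (select * from ignoreiln where mtyp = 'SLSRPT'),
         invrpt as (select * from ignoreiln where mtyp = 'INVRPT')
    insert into ignoreiln
    select a.mdir, 'INVRPT', a.siln, 'Auto Created'
    from slsrpt a
    left outer join invrpt b on b.mdir = a.mdir and b.siln = a.siln
    where b.siln is null
""")

print(con.execute(
    "select siln, note from ignoreiln where mtyp = 'INVRPT' order by siln"
).fetchall())
# [('1111111111111', 'manual'), ('2222222222222', 'Auto Created')]
```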

If you want to know more, I would recommend starting with the IBM i Information Centre.
