Q L   H A C K E R ' S   J O U R N A L
      ===========================================
           Supporting  All  QL  Programmers
      ===========================================
         #31                       June 1999 
      
    The QL Hacker's Journal (QHJ) is published by Tim
Swenson as a service to the QL Community.  The QHJ is
freely distributable.  Past issues are available on disk,
via e-mail, or via the Anon-FTP server, garbo.uwasa.fi. 
The QHJ is always on the look out for article submissions.

        QL Hacker's Journal
     c/o Tim Swenson
     2455 Medallion Dr. 
     Union City, CA 94587
     swensont@geocities.com  swensont@mail.sns.com
     http://www.geocities.com/SiliconValley/Pines/5865/index.html

 
 
EDITOR'S FORUM 
 
This issue is far later than I would like.  Planning for the 
West Coast Sinclair Show (for which this issue is being 
prepared) took a fair amount of time.  I've also been 
fiddling around the house getting it ready for guests coming 
to the show.  I've also been distracted by some projects 
from work.  I'm looking forward to running Linux on my Q40 
and doing some "Professional" stuff with it. 
 
Well, after about a year, the Qliberator Source Book project 
has reached the point where I have enough material to 
release.  Since Qliberator was only a small part of the main 
document, I changed the title to the SuperBasic Source Book. 
The focus was on Qliberator, Programming Toolkits, and 
Programming Tools.  There are no sections on the SuperBasic 
language itself; the emphasis was on what is necessary to 
produce compiled, professional code.  I plan to add more 
information to the Source Book as I find the time.  A number 
of sections came from the QHJ, but were expanded with more 
information.  The whole document (along with many others) 
can be found on my web page: 
 
    www.geocities.com/SiliconValley/Pines/5865/ 
 
Last issue covered two different languages, Perl and AWK. 
This started me thinking about other languages that are 
available on the QL and who, if anyone, uses them.  A number 
of people have ported different languages to the QL, from 
the then popular XLISP to the now popular Perl.  Each 
language has its own features and reasons for being.  What I 
want to know is, who uses these languages?  Has anybody done 
anything useful with them?  Do you have a favorite language 
that you like to program in?  If you use a language on the 
QL other than Assembly, C, SuperBasic, or Perl (we just 
touched on it), let me know.  Tell me what language you use, 
what you use it for, how it suits your needs, and provide an 
example of a useful tool you've created with the language. 
 
 
 
 
 
STRUCTURED SUPERBASIC 2.6.1 
 
One project keeping me away from working on the QHJ was 
updating Structured SuperBasic.  I've made a few minor 
changes to it that make it much more useful.  I've also 
added a new utility that makes SSB production easier. 
 
The changes are: 
 
1 - Added a second command line argument of Starting Line 
Number.  This was added so that SSB could be used with the 
Unix utility 'make'.  Make, which comes with the C68 
distribution, is a tool designed to recompile only those 
sections of code that have changed.  With SSB, if you have a 
program comprised of 5 program files and you only change 1 
of them, you have to run SSB on the whole lot (if you've 
used #include statements).  Using 'make', you leave out the 
#include statements and let make run the SSB filter for you. 
If you have only changed 1 file, then only that file is 
converted from _SSB to _BAS, and all of the _BAS files are 
added together into one file (using 'cat', another Unix 
utility). 
 
I'm not saying that we all should be using 'make', but I 
wanted the ability to use it built into SSB. 
 
2 - When using the command line, if SSB fails with an error, 
the _BAS file is deleted.  This was needed for SSBGO, the 
utility mentioned below. 
 
3 - Better error reporting. 
 
The new utility is SSBGO.  SSBGO automates running SSB and 
Qliberator.  SSBGO was designed to be used with MicroEmacs. 
From MicroEmacs, I would save a file, then use the 
execute-program command.  I would enter: 
 
   ssbgo flp1_file & 
 
(the & is to EXEC it and not EXEC_W it) 
 
SSBGO will then run SSB on 'flp1_file_ssb' (_ssb is the 
default extension).  Once it has file_bas, it then loads 
file_bas into SuperBasic, SAVEs the file to a temporary 
file, checks it for the keyword MISTake, and exits if one 
is found.  If not, it executes the LIBERATE command, which 
is the command line interface to Qliberator.  Qliberator 
then fires up, compiles the program, and finishes.  So I 
have gone from having SSB source to a compiled program in 
just a few minutes, with only one command.  SSBGO can be 
used with other editors; you just need to CTRL-C out of the 
editor, EXEC SSBGO, and let it run. 
 
SSB261_ZIP is available on my web page. 
 
FILECONFIG 
 
Another program that I have been working on is FileConfig. 
The short explanation is that FileConfig is an automated 
version of BasConfig, by O. Fink (and modified by Norman 
Dunbar and Dilwyn Jones).  With BasConfig, you can't edit an 
existing Config Block, only create a new one.  With 
FileConfig, you store the definition of a Config Block in a 
text file.  If you need to edit a Config Block, edit the 
definition file, run FileConfig, and you have a new Config 
Block. 
 
FileConfig is designed for programmers and has very minimal 
error checking.  It does not verify the data put into the 
Config Blocks, which means that you could attempt to put the 
character A into a Byte item.  If you attempt this, the 
results will be "undefined", which is a nice way of saying, 
"you are on your own." 
 
FileConfig is available on my web site. 
 
 
MICROEMACS MACROS 
 
Thierry Godefroy has ported over the latest version of 
MicroEmacs 4.00 to the QL.  He has added Pointer Environment 
support for MicroEmacs, including menu items for all the 
commands.  This has made MicroEmacs much more appealing and 
much easier to use.  Just before this port, I had been 
playing a little with configuring MicroEmacs and 
tinkering with macros.  I have tried using macros in the 
past, but I had not quite figured out how to use them.  With 
all the commands now available from pulldown menus, it is 
very easy to execute a macro from a file. 
 
Now that I know how to execute macros from a file, the next 
thing was to figure out what would be useful to write as a 
macro.   
 
I've looked over some macros that I found on the MicroEmacs 
web page.  These macros helped create HTML files.  These 
macros would query the user for any information they needed 
when creating HTML constructs.  This means that an 
application could be written in macros, querying the user 
for certain data, and generate an end product.  Given the 
maths functions built into MicroEmacs, one could write short 
little calculating programs, just like we did, years ago, on 
the ZX81. 
 
The macro I wrote to show how this works is a simple 
mail-merge-style application.  The user creates a document, 
with fields marked out where they want information to go. 
Here is a short example: 
 
   @fname@ @lname@ 
   @street@ 
   @city@ 
 
   Hello @fname@, 
 
   How are you today?  How is your wife @wife@? 
 
   Signed, 
 
Here there are fields marked for first name, last name, 
street, city, and wife's name.  Since this is a text editor, 
I used an at sign (@) at the beginning and end of each field 
to make it distinct from the rest of the text. 
 
The macro will first query the user for the information and 
then it will go through the text file, replacing the marked 
fields with the user-provided data.  A macro like this can 
be useful if you are writing a Christmas letter that you 
want to make a little more personal, but still save time in 
writing.  The macro is faster than editing the document 
yourself, or even running the same search-and-replace 
queries by hand. 
 
The command  
 
    set %variable @"String" 
 
tells MicroEmacs to query the user for input, showing 
"String" on the command line, and store the data in the 
variable %variable.  Without the at sign (@), the string 
"String" would be stored in %variable. 
 
   ;This macro will query the user for some items 
   ; and then replace them with marked fields in the 
   ; text file. 
 
   goto-line 1 
   set %fname @"First Name : " 
   set %lname @"Last Name : " 
   set %addr @"Street Address : " 
   set %city @"City : " 
   set %wife @"Wife's First Name : " 
 
   write-message "Replacing Text ..." 
 
   replace-string "@fname@" %fname 
    ; Need to goto to the beginning of the 
    ; file because the search starts from 
    ; where the cursor is to the end of file. 
   beginning-of-file 
   replace-string "@lname@" %lname 
   beginning-of-file 
   replace-string "@street@" %addr 
   beginning-of-file 
   replace-string "@city@" %city 
   beginning-of-file 
   replace-string "@wife@" %wife 
   beginning-of-file 
 
   write-message "Done ..." 
 
 
When Thierry introduced spell checking with MicroEmacs 4.00, 
it only allowed spell checking of a single word, already 
marked.  I thought it would be a good idea to write a macro 
that would walk through a file and spell check every word. 
Thierry has since mentioned that he is looking to expand the 
spell checking capability to be more user friendly.  Still, 
the idea of writing a macro to walk through a file, word by 
word, seemed like a good challenge.  Below is the 
macro. 
 
store-procedure get-word 
   set $kill "" 
   !force next-word 
   set-mark 
   !force end-of-word 
   copy-region 
   set %word $kill 
;   write-message &cat "The Word is : " %word 
!endm 
 
end-of-file 
set %end $curline 
beginning-of-file 
 
!while &less $curline %end 
   get-word 
   write-message &cat "Word is : " %word 
!endwhile 
 
 
PROGRAM INTERNATIONALIZATION 
 
About 10 years ago I attended some vendor training on how to 
program and extend their particular office automation suite. 
One of the things that I took away from the training was how 
they designed their system to adapt to many languages. 
Recently the method came to mind.  As I was thinking about 
the possibilities of using this method in my own programs, I 
pondered over its limitations and what other methods might 
be used. 
 
After some thought I have considered three different 
approaches to allowing a program to support multiple 
languages: 
 
     - Text File 
         One file per language 
 
     - All languages stored in executable 
      
     - One executable per language 
 
Before covering the different approaches, the main thing 
that each approach hinges on is the storage of all output 
messages in an array.  Instead of having a line like this: 
 
     PRINT #3,"File Not Found" 
 
You would have something like this: 
 
     PRINT #3,message_array$(35) 

Since the output messages are not hard coded, the array can 
be changed to suit the language.  For every possible output 
message you would have to put an entry in the array. 
Granted, this will make reading and maintaining the source 
code more difficult, but it makes supporting different 
languages much easier. 
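 
Just to give the flavour of the idea, here is a rough 
SuperBasic sketch.  The array name, its size, and the 
message number are only made up for the example: 

     DIM message_array$(50,40) 
     REMark ... fill the array from a file or from DATA 
     REMark statements, as discussed below ... 
     PRINT #3,message_array$(35) 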
 
Now the difference between the approaches is how the 
different arrays for each language are stored.  Each method 
has its pluses and minuses, which have to be weighed 
against each program's needs. 
 
The first method mentioned, Text File, is the method that I 
learned in the training class.  The developers created a 
text file, in each language, of all of the possible 
messages.  Each file would be given a different name.  The 
program would expect a certain file name.  The file for the 
current language would be renamed to that name.  When the 
program was executed, the program would read the file and 
load messages from the file into the array. 
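 
In SuperBasic this could look something like the sketch 
below; the channel number, file name, and message count are 
just assumptions for the example: 

     DIM message_array$(50,40) 
     OPEN_IN #4,flp1_messages_txt 
     FOR i = 1 TO 50 : INPUT #4,message_array$(i) 
     CLOSE #4 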
 
The problem with this method is the overhead of reading in 
the file.  If you have a program that may be executed many 
times in a single session, the speed of the program will 
suffer from reading in the language file each time.  The 
overhead matters much less if you are writing an 
application that, once executed, will run for a while, such 
as a word processor or spreadsheet. 
 
One advantage of having the messages in a text file is that 
new languages can be added to the program, with no change to 
the executable. 
 
One way of getting around the overhead of reading in a text 
file is to store all of the messages for all the languages 
in the executable and have a command line option or 
environment variable determine which language is chosen. 
With SuperBasic the different messages would be stored in 
DATA statements.  Once the language has been determined, 
the program would select which DATA statements to read into 
the array. 
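 
As a rough sketch, with the message count and the way the 
language is picked (a simple variable, lang%) left as 
assumptions: 

     DIM message_array$(3,40) 
     REMark lang% is 1 for English, 2 for German; how it 
     REMark gets set (command line option, environment 
     REMark variable) is up to the program 
     RESTORE 
     REMark skip the DATA for any earlier languages 
     IF lang% > 1 THEN FOR j = 1 TO (lang%-1)*3 : READ dummy$ 
     FOR i = 1 TO 3 : READ message_array$(i) 

     DATA "File Not Found","Disk Full","Bad Parameter" 
     DATA "Datei nicht gefunden","Diskette voll","Falscher Parameter" 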
 
There are two disadvantages to this approach.  First, the 
space needed for each language may add significantly to the 
size of the executable.  Secondly, if a new language needs 
to be added, then the program has to be recompiled. 
 
Another approach is to create the program the same way as 
the previous approach, but use conditional compilation to 
create an executable for each language.  This will cut down 
on the amount of space needed for the message data, but it 
does mean that a different executable would have to be 
distributed for each language.  If you included each 
language executable on the distribution media, then this 
approach may work. 
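 
Since SSB already supports #include (mentioned above), one 
rough way to get this effect is to keep each language's 
DATA block in its own _ssb file and pull in only one of 
them per build; the file names and the exact #include 
syntax here are only assumptions: 

     REMark English build of the main _ssb file 
     #include flp1_msgs_english_ssb 

     REMark a German build would instead include 
     REMark flp1_msgs_german_ssb 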
 
Given the wide distribution of QL users and the many 
different languages they speak, having a localized version 
of an application may make it more accepted in the 
community.  The difficult part will be in translating the 
messages into different languages.  If the Text File 
approach is used, then users could translate the messages 
and distribute the new language file to other users.  I 
believe something like this has been done with various 
dictionary files. 
 
