Y2K, the 2038 Unix timeout, and the IBM 5100

....it would make it a lot easier to do what could otherwise be done by brute force.....if you were going to try to disassemble lots of large, complex mainframe-based programs, it would take many more people

I think that is the answer. The idea is to fix the legacy applications because of the 2038 Unix timeout error. Maybe these legacy IBM applications are running on Unix machines with the aid of an emulator, and when "they" fix the 2038 timeout error, it causes the IBM mainframe applications running on those Unix machines to malfunction.
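For context, the "2038 Unix timeout" being discussed is the rollover of the classic Unix clock: a signed 32-bit counter of seconds since 1970-01-01. Here is a minimal Python sketch of that rollover (the helper name `to_time_t32` is just an illustration, not from any real API):

```python
import datetime
import struct

def to_time_t32(dt):
    # seconds since the Unix epoch, stored the legacy way: a signed 32-bit field
    seconds = int((dt - datetime.datetime(1970, 1, 1)).total_seconds())
    # wrap into 32 bits and reinterpret as signed -- this is the rollover
    (wrapped,) = struct.unpack("<i", struct.pack("<I", seconds & 0xFFFFFFFF))
    return wrapped

# the last representable moment for a signed 32-bit time_t:
last_ok = to_time_t32(datetime.datetime(2038, 1, 19, 3, 14, 7))
# one second later the counter wraps negative, i.e. back to December 1901
rolled = to_time_t32(datetime.datetime(2038, 1, 19, 3, 14, 8))
```

The last valid second is 2038-01-19 03:14:07 UTC; one tick later the counter goes negative and the machine thinks it is 1901.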
 
Are you saying that this 2038 bug will arise in our worldline too and the John in our worldline goes back to get an IBM 5100 again?
 
Titor wrote:
"Your deductions are quite accurate. It's possible to go forward to 'your' 2036 and it would look nothing like mine."


So here his worldline is different, and I say his worldline had Y2K, and so the Y2K38 bug existed there too.

Your quote about his worldline and ours being exactly alike doesn't come into the picture regarding Y2K
 
Because of this:

Titor wrote:
"Yes, the Pearl Harbor example relates to Y2K. Have you considered that I might already have accidentally screwed up your worldline?"

What else do you think he meant when he wrote this?

And this:

"What amazes me is why no one here wonders why Y2K didn't hit them at all?"


Why does no one wonder? Because no one knew that he was the TT who brought the 5100 from 1975 to this worldline to fix Y2K.

Anyone with a "tweaked" IBM 5100 can do the job without revealing himself as a TT. So there is no "secret agenda" in this. He did it because it was his secondary objective.

That secondary objective is basically to gather as much information about a worldline based on a set of observable variables when we first arrive. Your worldline met those conditions.
 
So here his worldline is different

I'm not saying his worldline is different. I am saying that regarding Y2K and Y2K38 his worldline is different.

Meanwhile, other events, like a natural disaster causing a civil war, become a major part of history, making the two worldlines exactly alike in those respects.

Many think Y2K was a major life-taking disaster. They should realise that was not the case. His worldline had Y2K in the sense that electronic data stored worldwide was scrambled, causing a lot of companies to lose cash while the programmers made money.

"Regardless of how the Y2K issue is viewed, modified code should be tested, and unmodified parts of the system should be retested to ensure that each "fixed" system is Y2K immune. As already stated, these testing expenses can be pricey. One reason is that few testing tools are smart enough to automatically know how to minimize testing costs for modified code. However, you would think a "smart" tool could determine exactly what code needed to be retested. It would be great if such an automated tool existed to distinguish that kind of code in an "optimized" mode, i.e., determine the least amount of code that needed to be retested to demonstrate that a code conversion was correct."
Unfortunately, no such tool exists. This suggests that there is a serious need for tools that seamlessly integrate with Y2K conversion tools and that test Y2K conversions. If such tools existed, the total global cost of the Y2K problem could be reduced while still providing sufficient confidence that Y2K conversions were correct. This could add up to astronomical savings, as the world-wide cost for fixes alone is $600 billion, not to mention legal liability costs that could exceed $1 trillion [2, 3].

http://www.stsc.hill.af.mil/crosstalk/1998/01/y2kfixes.asp
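The underlying bug that all this fixing and retesting was about is the two-digit year. A minimal sketch, assuming a hypothetical legacy routine that compares years as two-digit fields:

```python
def is_expired(current_yy, expiry_yy):
    # legacy two-digit-year comparison: works fine until the century rolls over
    return current_yy > expiry_yy

# in "98" a record expiring in "99" is correctly still valid
still_valid_1998 = not is_expired(98, 99)

# but in "00" the same test reads "0 > 99" as false,
# so a record that expired a year ago looks valid forever
wrongly_valid_2000 = not is_expired(0, 99)
```

Thousands of comparisons like this were buried in decades-old code, which is why finding and retesting them was so expensive.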

also try this google search link:

http://www.google.co.in/search?biw=780&hl=en&q=legacy+code%2BY2K&btnG=Google+Search&meta=
 
What is LEGACY CODE?

Here is an explanation:

Getting into computer play late in the game helped Amherst stay ahead of the problem.

"We really didn't get in until the mid-1980s," Galkiewicz said. The town has about 175 PCs, all purchased in the last three or four years.

None of those should present a problem, Galkiewicz said.

Problems arise with older systems written in COBOL computer code in the 1960s and 1970s, he said.

"That's the code that is the big problem. It's called legacy code," he said.


http://www.bizjournals.com/buffalo/stories/1998/06/01/focus3.html
 
That's it with the "Legacy Code". I hope that part of the story is clear to you.

You also pointed out another code. This code is used by Titor and his grandfather to make sure that Titor travels to the correct worldline in the future from 1975. Titor would check for this code in 1998 and make sure he is on the correct worldline.

This code, which you can call the "KA-BOOM Code", is "IBM 5110: Codename Yellowstone".
 
MadIce,

What a surprise to see you over here. Welcome /ttiforum/images/graemlins/smile.gif

I went to the site. Funny, no information about the book other than a title and the book is out of print.

Sorry. Going to need something more definitive than that. Nice try though.
 
MadIce,

I finally figured out why your name is MADice. Now I get it. But what's the ice for? /ttiforum/images/graemlins/smile.gif
 
Maybe you can try writing to the Australian Atomic Energy Commission, Research Establishment. They seem to be the publisher.

I am sure they'll photocopy it for you.
 
That is the most common response I hear. But one can only run an executable written for the S/360 instruction set on an emulator. If you need to fix a program, you must convert the machine code to a language first. Why? An executable, or binary, file is just a bunch of 1s and 0s. So for code that looks like 0010 1011 1110 0101, it is impossible to know whether that is an instruction, data, or a pointer (an address). You need to convert the machine code to a language to be able to modify it. Take a look at these statements by Titor:

Titor: "The 5100 has the ability to easily translate between the old IBM code, APL, BASIC and (with a few tweaks in 1975) UNIX."

Titor: “We need the system (IBM 5100) to “debug” various lagacy (sic) computer programs in 2036.”

If you don't have a disassembler, one can always use a debugger. A debugger (or "monitor", as it was called in those days) can be either a hardware monitor (the "blinking lights") or a software debugger. In fact, a software debugger is a kind of interactive disassembler that works in real time with the program running. Very handy. There are of course debuggers for high-level languages too.

Here is one:

The Tachyon 390 Cross Assembler permits you to assemble System/360, System/370, 370-XA, ESA/370 and ESA/390 assembler language programs on workstation machines. The language supported is highly compatible with IBM’s High Level Assembler release 5. The Tachyon 390 Cross Assembler assembles most programs that can be correctly assembled by the IBM assemblers. Source, macro and listing files can be read or written in ASCII or EBCDIC. The object files produced may be transferred to the mainframe system, linked into a load module, and executed. The Tachyon 390 Cross Assembler can be integrated with popular Integrated Development Environments and editors. The assembler also provides enhanced debugging information for Cole Software’s XDC debugger.
And did you know you can actually buy these things? Off the shelf. No problem.
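To make concrete what a disassembler actually does, here is a toy sketch in Python for an invented two-instruction machine. The opcodes and mnemonics are made up for illustration, not real S/360 encodings:

```python
# opcode table for a made-up machine: opcode -> (mnemonic, operand byte count)
OPCODES = {
    0x01: ("LOAD", 1),
    0x02: ("ADD", 1),
    0xFF: ("HALT", 0),
}

def disassemble(code):
    # walk the byte stream, decode each opcode, and collect a listing
    lines, pc = [], 0
    while pc < len(code):
        mnemonic, nargs = OPCODES[code[pc]]
        operands = code[pc + 1 : pc + 1 + nargs]
        lines.append((pc, mnemonic, list(operands)))
        pc += 1 + nargs
    return lines

listing = disassemble(bytes([0x01, 0x2A, 0x02, 0x05, 0xFF]))
```

A real disassembler for the S/360 is of course vastly more complex (variable-length instructions, data mixed into code, address constants), but the principle is this table lookup.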

As you can see, you don't need an IBM 5100 for that. Any workstation will do. At best I can imagine that the 5100 would be needed as a machine to communicate with a mainframe. But that would be a poor choice.

For example, the character set of the 5100 was not a standard character set. That was changed in the IBM 5110, which was released in 1978. APL uses a lot of single characters that are actually some of its instructions (or tokens, depending on how you want to name them). You can imagine that this is a problem when these do not correspond to the ones on a mainframe. One should be very careful with this.

The guy from the computer museum (I think you have a link to his site somewhere) told me that there is little chance that an IBM 5100 was able to run System/360 code, because it was actually missing the required supporting hardware.

We also found out that the built-in time sharing code was broken. So, it isn't simple to use the 5100 as a terminal.

If I am not mistaken, the IBM 5100 used a subset of the instruction set. That was intended to speed up development: it saved porting the APL/BASIC code and thus reduced development costs. Also, the IBM 5100 had limited memory compared to the mainframes. That would give severe problems when running production code.

Then there is the problem of the tape. The QIC 300 tapes are standard, but the IBM 5100 is missing a real-time clock, which means it cannot properly write time and date stamps to these tapes. Another problem is that the headers a 5100 wrote to tape were not binary compatible with other machines for one of the data types. I think it was number 6; not sure. The latter was fixed in the 5110.

We also have the problem of disk drives: the 5100 didn't support any. That too was fixed in the 5110.

So... That simple fix Titor talked about is actually a series of fixes.

I wonder why they didn't use a standard workstation or some early PCs specially equipped for the job. The IBM 5100 wasn't really portable. It ran off mains power and weighed... 50 lbs... IIRC? Here is what I would use.

The IBM Information Systems Division, which produced mainframe computers, introduced the PC/XT 370 and 3270 PC computers in October 1983. These products were designed to be a link to IBM mainframe computers. They had special built-in hardware to emulate System/370 mainframes and were able to act as a 3277 display terminal. The PC/XT 370 used an 8088 and also had an 8087 for floating-point instructions. In addition it used two Motorola MC68000 processors for emulation of the System/370 instruction set. It had 512KB RAM plus an extra 256KB, which brought the total to 768KB of extended memory. It had a built-in 360KB floppy drive and a 10MB (or optionally 20MB) hard drive. There was a software package available called VM/PC to interface with a mainframe.

The 3270 PC combined a standard PC with an IBM 3270 terminal. The base computer had 256KB, expandable to 640KB. The 122-key keyboard featured all the keys of a standard PC and a 3270 terminal. The software allowed access to up to 4 programs concurrently on the host computer, two "notebook" transfer areas and a PC program. Seven windows could be defined to monitor the software being accessed.

Sounds handy to me. Don't you think so?
 
MadIce,

As usual, you miss the point. Maybe if you re-read the posts you will see how your logic is flawed. I don't have time to spoon-feed you information. Try reading and thinking AT THE SAME TIME!
 
Now that we have established that the IBM 5100 did have issues with its communication, that it didn't have enough memory to run production code, that there is an off-the-shelf interactive debugger, that there is a disassembler, that there is even an off-the-shelf cross-assembler, and that there are better alternatives for communicating with mainframes than the IBM 5100 (two specialized machines!), the next step is to find a suitable APL version for one of the machines I mentioned earlier.

If you look into the family tree of IBM APL languages, you'll see that the IBM 5100 used a dialect called "APL 5100 A". It was derived from "APLSV PRPQ Version 1". That of course limits which mainframe APL software can be run to the System/360. APL versions for the System/370 use somewhat different microcode and have some added features and fixes. The IBM 5100 is certainly not upward and binary compatible with that system. If we want a better alternative, then all we have to do is find an APL dialect in the tree older than PRPQ Version 1 to run on a machine that is more versatile than the IBM 5100. In our case, the IBM PC/XT 370 and/or IBM 3270 PC.

As you can imagine, this is no real problem, because later versions are downward compatible. In 1982 APL/PC was developed. To allow it to be used with less memory, the elastic workspace concept of System/1 was adopted. The language itself was essentially that of the APLSV internal system, with some features found in the APL2 IUP. In addition to several enhancements to improve migration to and communication between various mainframe APL systems, IBM made a deliberate effort to bring as much as possible of the underlying machine under the control of the APL programmer. A new command allowed reading and writing of any memory location and allowed machine-code subroutines to be executed. Needless to say, it interfaced perfectly with the PC. It hit the market in 1983.

The sweet part of APL/PC is of course that it also runs on a standard PC or even a laptop. That would be handy, don't you think so? Maybe a more modern version like APL/PS2 would run mainframe code a bit more smoothly. There was even a 32-bit version of that released in 1989 which allowed 15MB of RAM access.

BTW: I forgot to mention that the IBM PC/XT 370 with VM/PC allows the hard disk to be used as virtual memory. That feature was implemented to run production code using more memory than the available RAM.
 
Just thinking about flawed logic: The fact that you can't buy the book now, shouldn't be a problem for a time traveller. The book is not interesting, but the fact that there is a disassembler is.
 
Titor: “We need the system (IBM 5100) to “debug” various lagacy (sic) computer programs in 2036.”

What does all that mean?

First, old IBM code must mean applications written for the IBM S/360 that run a large part of the infrastructure. It could possibly mean applications written for the S/370 as well. The S/370 was introduced in 1970 and it is possible the 5100 could work with S/370 instructions as well. The S/370 instruction set did not change again until 1982 when addresses were expanded from 24 bits to 31 bits. So we are talking about potentially all applications written for IBM mainframes between 1960 and 1982 + a few years.
No, it could not. If we assume the machine runs System/360 code (which is very unlikely, because it is missing the supporting hardware) and that all hardware developed later is downward compatible, then that still does not mean that the older hardware can run code developed for the newer hardware. There is a difference between downward compatibility and upward compatibility. An "expert" should know that.
 
Second, the source code must not be available. Otherwise one could simply edit the source code and recompile the application.
You only recompile languages that use a compiler. Programs written in assembly require either a disassembler or a software debugger to retrieve the code and re-assemble it. Reading assembly is not difficult using a debugger or disassembler. Languages like BASIC and APL are interpreted languages: they are translated at run time. That's why they are a bit slower than compiled languages or assembly. You can simply edit BASIC and APL software and run it without an extra compile or assemble step.
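The edit-and-run cycle of an interpreted language can be sketched in Python (itself interpreted), using `exec` to stand in for the BASIC/APL interpreter; the variable names here are just for illustration:

```python
# interpreted code is translated at run time, so a "fix" is just a text edit
source = "result = year + 30"

namespace = {"year": 70}
exec(source, namespace)            # translated and executed on the spot
before_fix = namespace["result"]   # 70 + 30 = 100: no longer fits in two digits

source = "result = (year + 30) % 100"   # edit the source text directly
exec(source, namespace)                 # and simply run it again
after_fix = namespace["result"]         # wraps to 0, the legacy two-digit answer
```

No compiler, linker, or assembler is invoked anywhere in that cycle, which is exactly why interpreted programs are easy to patch and slower to run.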
 
The problem with working with executable files is that it contains machine code, like 1001 0101 1000 0000 and one does not know if a “chunk” of data is an instruction, data, or a pointer to data. The computer CPU knows how to handle the instructions but it would be very difficult to reconstruct assembly language or some higher level language from executable code.
Like I said, there is no problem at all when using a disassembler or debugger. The fact that you think it is difficult is probably because you are not trained to do so. The problem is not those 0s and 1s; you don't see those often. Assemblers, debuggers and disassemblers let you switch to any radix (binary, octal, decimal, hexadecimal) you like. Nor is it a problem that you have a hard time finding code or data segments; again, debuggers and disassemblers can help you with those. The real problem is that you need to know a lot about the underlying operating system.
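Both points above, that raw bytes carry no label saying "instruction" or "data", and that a debugger can show the same word in any radix, can be demonstrated in a few lines of Python (the byte values are arbitrary examples):

```python
import struct

# the same four bytes: nothing in them says which view is "right"
raw = bytes([0x10, 0x01, 0x01, 0x02])

as_word = struct.unpack(">I", raw)[0]    # one 32-bit value: data?
as_halves = struct.unpack(">HH", raw)    # two 16-bit fields: instruction parts?
as_float = struct.unpack(">f", raw)[0]   # or a floating-point number?

# a debugger will happily show the very same word in any radix you ask for
views = {"bin": bin(as_word), "oct": oct(as_word), "hex": hex(as_word)}
```

Only context, knowing where the loader put code and where it put data, tells you which interpretation is the intended one, which is the disassembler's (and the reverse engineer's) real job.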
 