Goro_Lives
Temporal Novice
John Titor is a fraud - here's a brief reason why
The comment in question:
---
As you are probably aware, UNIX will have a timeout error in 2038 and many of the mainframe systems that ran a large part of the infrastructure were based on very old IBM computer code. The 5100 has the ability to easily translate between the old IBM code, APL, BASIC and (with a few tweaks in 1975) UNIX. This may seem insignificant but the fact that the 5100 is portable means I can easily take it back to 2036. I do expect they will create some sort of emulation system to use in multiple locations.
---
Software developers are well aware of this problem. It's referred to as the UNIX 2038 problem, and there are two viable solutions for it.
The problem stems from UNIX representing time as a signed 32-bit count of seconds since the epoch (January 1, 1970), which has a rather limited range. The last moment a 32-bit UNIX clock can represent is 03:14:07 UTC on January 19, 2038. The first solution is to advance the epoch date, pushing the wrap-around out by another several decades. The second is to upgrade to 64-bit time values, typically alongside 64-bit processors. Both of these solutions require recompilation and potentially small code tweaks which, admittedly, can be difficult to locate, but hold your judgement for a moment. (A third solution is to reinterpret the counter as an unsigned 32-bit value rather than a signed one, which buys time until 2106. I don't consider this reasonably viable.)
The supposed threat is that old 32-bit hardware and software will still be running in 2038 and could pose a serious problem should, say, a nuclear reactor contain such systems. The reality is that this threat is nonsense. We heard the same warnings before Y2K, and there were no issues, because upgrades are common and eventual. It's ALWAYS cheaper and easier to replace legacy hardware and software than to repair them, mainly due to decreased hardware costs, added feature sets, and increased productivity from new tools.
We currently have a lead time of 32 years, and we're already well into the switch to 64-bit systems. Compare that to Y2K, where most corporations began preparations as late as one to three years before 2000. EVERY hardware and software system fails at some point. It's simply not possible to run a nuclear power plant on hardware so old that there are no replacement parts and no knowledgeable developers. When components fail, they're generally replaced with modern counterparts, so those legacy 32-bit systems will be replaced by 64-bit systems at some point. Statistics are on our side with regard to mean time to failure.
Also, consider the reason we're undergoing the transition to 64-bit systems. A 32-bit application cannot address more than 4 GB of memory. That is a severe limitation for many large-scale applications, so there is a very strong incentive to move to 64-bit operating systems, which can address up to 17,179,869,184 GB (16 exabytes). There was no such driving factor immediately prior to Y2K, and yet we sailed through Y2K with no problems.
Another thing to consider is that most systems are data driven: they store and retrieve data from an external storage system, such as an Oracle database. Those databases do not store time as a simple 32-bit value, so they will not suffer the 2038 problem. Even if they did, upgrading a database is routine; it happens all the time. Ask a database administrator.
Lastly, I predict within 7 years, 95% of the population will be running a 64-bit operating system on their personal desktops. 64-bit operating systems will be as common as salsa at a taco bar.
Titor was toying with you by casually referring to the 2038 problem. By pretending not to fully understand it, he fostered an air of plausibility. The 2038 problem will not exist when 2038 arrives. This is fear-mongering constructed from partial facts and false implications.
http://en.wikipedia.org/wiki/Unix_time#32-bit_overflow
http://en.wikipedia.org/wiki/Year_2038_problem