The History of Dark Energy Goes Way, Way Back

TheCigSmokingMan

Rift Surfer
The History of Dark Energy Goes Way, Way Back
http://space.com/scienceastronomy/061116_darkenergy_infantuniverse.html
By Sara Goudarzi
Staff Writer
posted: 16 November 2006
01:10 pm ET

Scientists now have evidence that dark energy has been around for most of the universe's history.

Using NASA’s Hubble Space Telescope, researchers measured the expansion of the universe 9 billion years ago based on 23 of the most distant supernovae ever detected.

As theoretically expected, they found that the mysterious antigravity force, apparently pushing galaxies outward at an accelerating pace, was acting on the ancient universe much like the present.

--------------------

Dark Energy? That's where RMT comes from? :)

TheCigMan
 
Now that we can create antimatter, was it in existence before our creation of it, or in a more "perfectly created" state prior to?

Is this question separate from what you are speaking on, or am I bringing up a separate topic?

Antimatter = imperfect dark matter?

*Someone with the knowledge could have a great thesis on their hands here.

Truly my last post, as I saw this afterward and had to reply.

Goodnight all.
 
I'd like to think that dark matter is masses of dark subatomic particles. I think there is a "breaking point" for photons when they can no longer remain at light speed due to the conical spread of photons through space over time.

Like if you threw a lightbulb into infinite space- you'd watch it drift away, getting smaller and smaller until you couldn't perceive it anymore. Light works in the opposite direction- outward, like a cone. And at some point, that photon will get so thin and stretched out that it simply falls apart into WIMPs, which attract each other (because they can only attract or repel, and if they repelled, the universe couldn't be cohesive). These become clouds that grow and collapse and grow until they become hydrogen atoms, and the rest we all know.

>>Now that we can create antimatter, was it in existence before our creation of it, or in a more "perfectly created" state prior to?<<

We exist in a closed universe, so conservation is intact and E = mc^2 holds. That means everything in the universe is interrelated with everything else. As such, the amount of energy required to create a quantity of antimatter is greater than the rest energy of the antimatter itself, and that is where it came from.
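
Just to put rough numbers on that (my own back-of-the-envelope sketch, not from the article): making antimatter by pair production costs at least the rest energy of the particle and the antiparticle together, so you always pay in more energy than the antimatter alone is "worth":

# Back-of-the-envelope sketch (my own numbers, not from the article above):
# creating antimatter by pair production costs at least the rest energy of
# BOTH the particle and its antiparticle, E = 2 * m * c^2.

c = 2.998e8             # speed of light, m/s
m_electron = 9.109e-31  # rest mass of an electron (or positron), kg

e_pair_joules = 2 * m_electron * c**2      # minimum energy for one e-/e+ pair
e_pair_mev = e_pair_joules / 1.602e-13     # 1 MeV = 1.602e-13 J

print(f"Minimum pair-production energy: {e_pair_mev:.3f} MeV")  # ~1.022 MeV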
 
I really like and enjoy your visual analogy.

My question, based on an analogy along the same lines, is this:

If computer code, say C++, is compiled and brought down to base binary,

what number system was E = mc^2 brought down to and based on? (As a numerical equation.)

I do not disagree; but the base compiler has a base in either analogy.

I.e., binary versus our current base number systems.

We can only build upon our tools. This was a comparison, not a relative cohesion.

*See my other post for reference. Do not get me wrong, I do applaud all the achievements; they are much more vast than I. :oops:

I just have a personal thesis on my own developments within numerical representations and the fundamentals on which they are drawn. I.e., a binary code is created, and all code is then created upon that, without regard for rebuilding the foundation once a greater understanding is achieved. This can be applied to many areas, methinks.

Funny how, when trying to push data processing beyond the GHz barrier, the thinking went toward increasing the width of the pipeline rather than further raising the clock rate
(i.e., the FAT32 bottleneck, NTFS, and the move to dual cores),
instead of looking back toward the binary on/off switches themselves.

Analogies.

I apologize, I had said I would not post, and I will not, however your reply intrigued me greatly!

God bless.

P.S;

CigMan

Your quote;

As theoretically expected, they found that the mysterious antigravity force, apparently pushing galaxies outward at an accelerating pace, was acting on the ancient universe much like the present.

Does this statement not contradict the theory of observed "cannibal galaxies", wherein the black holes pull adjoining bodies in upon themselves so that they conjoin as an end result?
I.e., the center dictates the attraction versus the size held within?

'Nite.
 
It's the computer's job to replicate reality; that is why computers must be as accurate as possible- because when computers start giving wrong answers they become worthless. From my edjumacation in high school (back in the IBM 5100 days), a computer sees pi as 3. It is then the programmer's job to make sure that 3 is always 3.14 and so on, so the answer remains true. This is why technology must always push the envelope to give us more and more accurate timekeeping- because pi is an exact, endless number.

Pi is a fascinating number. Imagine a sphere floating in space- your job is to measure the circumference. How do you do it? You could put a tape measure all around the sphere and get a good reading but this is not practical for round things like planets or black holes.

This is where pi comes in. In theory, if you put a ruler up against the sphere, you'll get a 2D measurement- the diameter. Multiply that number by pi and, amazingly, you get the circumference- and the better the 2D measurement, the more accurate the circumference will be.
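
If you want to see it in numbers (a trivial sketch of my own, nothing fancy):

import math

# Measure the sphere "in 2D" (its diameter), multiply by pi, and you have the
# circumference. The ratio never changes with scale, which is the whole point.
for diameter in (0.1, 1.0, 12_742_000.0):   # a marble, a ball, roughly Earth (metres)
    circumference = math.pi * diameter
    print(f"d = {diameter:>12g} m  ->  C = {circumference:.6g} m  (C/d = {circumference/diameter:.5f})")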

Here's what makes pi so interesting- it's the same no matter how big or small the object is! For example, pi is 3.14- what would happen if we changed pi to 3.3? You couldn't- pi is pi! If you change the diameter of a sphere, the circumference changes accordingly, but the ratio stays pi... there is no way for a sphere to have any ratio other than pi- it's simply not possible in this universe. A 3.14 sphere that we try to make 3.3 is not a 3.3 sphere- it's just a larger 3.14 sphere!

I like to think of pi as "the relationship between 2D and 3D", and it is an exact number. With pi, you can look at any object in 2D (e.g., photographs) and deduce an accurate 3D rendering of it.

This is what makes Titor's story about the 5100 ring true- it only thinks in one-second increments. To the 5100, the second is the smallest possible unit of time; a second cannot be broken down into anything smaller. If you are a time traveller and you want an accurate time machine (which you would absolutely need in a pi-driven universe), the 5100 is the perfect tool because it only thinks in seconds.

Sure, you could program a modern-day computer to calculate seconds accurately and it would do a pretty good job of it too- if you told a computer to ring a bell as every second passes it would, in effect, count cesium oscillations and ring a bell every time it reached roughly 9.2 billion (9,192,631,770 cycles of the cesium-133 transition, which is how the second is defined, because cesium oscillates at a predictable rate).
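
If you want to picture that (just a toy sketch- no real atomic clock is wired this simply):

# Toy version of "count cesium cycles, ring a bell each second". The SI second
# is 9,192,631,770 oscillations of the cesium-133 hyperfine transition; a clock
# just tallies cycles and ticks every time the count rolls over. (Simulated
# here - no actual cesium involved.)

CYCLES_PER_SECOND = 9_192_631_770

def seconds_elapsed(total_cycles: int) -> int:
    """How many whole seconds (bell rings) after total_cycles oscillations."""
    return total_cycles // CYCLES_PER_SECOND

print(seconds_elapsed(3 * CYCLES_PER_SECOND + 5))   # -> 3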

But if every second is equal to the computer counting to roughly 9.2 billion and you're time traveling 63 years into the past, the computer is doing something on the order of 10^19 calculations to land you at point B. And since pi is an endless (unquantifiable) number, eventually the computer must miscalculate- if nothing else, Heisenberg's uncertainty principle guarantees it.

But with a 5100 and one-second increments, you have billions of times fewer calculations that can go wrong... it's simply a way to get from A to B using far less computation, giving a more accurate, one-second answer. This is why, when Titor went back to 1975, he set his time machine to the second, not to the Planck unit.

As an example of this, I have devised a way to autogenerate an endless list of prime numbers. While any computer can do that (within its engineering limitations), the way I do it uses minimal computing power. As such, the list can be generated on an old 386 SX- you don't need a supercomputer even though you're crunching huge numbers. I have tested it and it works- you can simply rig it to a printer and have it spit out sequential primes endlessly on an old computer. In other words, I thought up a shortcut. In that regard, Titor's 5100 is a shortcut for traveling through time without having to dot all the i's and cross all the t's.
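
I won't post my actual routine, but just to give the flavor of the idea (this is plain trial division, not my shortcut- the point is that it needs almost no memory):

# NOT the method described above - just the simplest way to grind out primes
# forever with almost no memory, in the spirit of "rig it to a printer".

def primes():
    """Yield primes endlessly using plain trial division up to sqrt(n)."""
    yield 2
    n = 3
    while True:
        d = 3
        is_prime = True
        while d * d <= n:
            if n % d == 0:
                is_prime = False
                break
            d += 2
        if is_prime:
            yield n
        n += 2

# Example: print the first ten primes (remove the slice to run endlessly).
import itertools
print(list(itertools.islice(primes(), 10)))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]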

I am so glad to be saying things like this without Rainman all over me.
 
Hello there, this is the previous poster above with a new registered name.
For personal reasons, I did not think it appropriate to keep that username for posting purposes, and in doing so I keep my word of not posting under that pseudonym.

My inclination to address the computer foundation of binary stems from a personal thought process that dictates we have moved forward enough with computer language to recreate the processor and, in doing so, the "binary" itself. Perhaps a new "hybrid" binary.

This is my thought: 0 = "off", 1 = "on". This two-part binary foundation has limitations the further we move with it.

A new processing ideology, in my mind, would lift this to another level,
i.e., 0 = "off", 1 = "on", 2 = "floating integer switch defined by logical algorithms".

This is, in my theory, without getting too involved at the moment, a way to make more logical artificial intelligence.

The key is held within the algorithm's logical sequence, in a way that lets the processor, the binary, and further up, the application, make "decisions" about the "3rd determining switch".

*Visual analogy*, i.e., taking the base theory of 2D code and bringing it up to a true 3D.

The visual analogy being that a definite line has a beginning and an end, while a triangle has a continuing pattern that can rotate from its beginning without end within its geometric design.

So, in my theory, the floating switch would have the possibility of revisiting its floating decision to dictate the "on"/"off" status or state as the algorithm of the logical program grows/learns from input based on itself and the end user.
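
To make that a little more concrete (a toy sketch only, with made-up names- not a real processor design, and the "rule" is just a placeholder):

# A toy sketch only (made-up names, not a real processor design): 0 and 1
# behave like ordinary binary, while 2 marks a "floating" decision that a
# later pass resolves - and could revisit - according to some logical rule.

OFF, ON, FLOAT = 0, 1, 2

def resolve(trits, rule):
    """Replace each FLOAT with whatever the rule decides for that position."""
    return [rule(i, trits) if t == FLOAT else t for i, t in enumerate(trits)]

def follow_previous(i, trits):
    """Placeholder rule: copy the previous value, defaulting to OFF."""
    return trits[i - 1] if i > 0 and trits[i - 1] != FLOAT else OFF

print(resolve([1, 2, 0, 2, 2, 1], follow_previous))   # -> [1, 1, 0, 0, 0, 1]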

I've had this thought for quite a while, and have applied it further, but without the tools to test/create, it is just a thesis for now.

I really enjoyed your analogy on pi; it makes quite a bit of sense, and it is a great accomplishment in the way it is applied. My stumbling block when considering this is that the measurement of the circumference of a sphere, while accurate, is not the only mathematical equation to keep in mind while registering other qualities about the "shape"; the variables can be quite numerous, as I'm sure we're all aware.

Cheers,

~D.
 
>>My inclination to address the computer foundation of binary stems from a personal thought process that dictates we have moved forward enough with computer language to recreate the processor and, in doing so, the "binary" itself. Perhaps a new "hybrid" binary. This is my thought: 0 = "off", 1 = "on". This two-part binary foundation has limitations the further we move with it.<<

By what underlying logic? Binary is still binary... our computers today run on the same 0s and 1s they did in 1947, and we have the same problems we've always had with processing power because of it.

>>A new processing ideology, in my mind, would lift this to another level,
i.e., 0 = "off", 1 = "on", 2 = "floating integer switch defined by logical algorithms".<<

How about:
1- On
2- Off
3- Sometimes on and sometimes off.

Now imagine a triangle with three "nodes" at the corners. Each node is one "amp" of power, so the triangle is three amps. If you feed that triangle four amps of energy, what will happen? Two of the nodes will be positive, one negative, and there will be a "round robin" of one floating amp going from node to node, turning them from + to - one at a time. That fourth "amp" will be, in effect, trapped within the triangle. That fourth amp is your trinary code. But this is Creedo talk.
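
If it helps to picture it (purely a toy simulation of my own reading of that, not an actual circuit):

# Three nodes, one extra "amp" hops from corner to corner each step, and
# whichever node holds it goes negative while the other two stay positive.

def round_robin(steps, nodes=3):
    hot = 0                                    # index of the node holding the extra amp
    for step in range(steps):
        frame = ["-" if i == hot else "+" for i in range(nodes)]
        print(f"step {step}: {' '.join(frame)}   (extra amp at node {hot})")
        hot = (hot + 1) % nodes                # the trapped amp moves to the next corner

round_robin(6)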

This also sounds like a P vs NP problem to me.

>>This is, in my theory, without getting too involved at the moment, a way to make more logical artificial intelligence.<<

John Titor mentioned Ginger (IT). "Ginger" refers to the Segway scooter, and IT means "Intelligent Technology"- that's what Ginger is.

The Segway is a motorized scooter that stabilizes itself. It does this with gyroscopes and tilt sensors that keep the "driver's space" straight and level- the machine senses how it is pitching relative to its surroundings and the motors compensate automatically. The effect is driving on a bumpy road and not feeling the bumps.

In effect, the Segway scooter is thinking about its environment. I believe Titor's reference to Ginger IT means it's the first step towards Artificial Intelligence- imagine a few generations down the road- a Segway with its own processor and GPS- it's basically a living thing at that point. Either that or a time machine. Or both.

Machines won't have the capacity to become intelligent until they can sense their environment and be able to interact with it. And whenever that happens, the first thing that machine will sense is itself, and then it will become self-aware. But this is Russ Manning talk.

>>...keep in mind while registering other qualities about the "shape"; the variables can be quite numerous, as I'm sure we're all aware.<<

Yes, but I was talking strictly about the 2D vs. 3D relationship, in a Hodge Conjecture kinda way.

Pleasure to hear from you-
 
I see your points.

The one thing I'd like to clarify is that when I made the analogy of a line with a definite start and end versus the triangle theory, that was just a visual analogy, not really how the energy would be processed.
For that, it is still a linear scheme: 0, 1, 2.

The basic binary would function just as it does now with the primary 0 and 1, and the 2 would be ignored until a logical algorithm is established. Depending on where the sequences lay, they would revisit themselves to alter the patterns; the patterns denoting a switch in function (on/off) would be flagged by the third switch, providing perhaps an artificial "memory" to return to this choice and alter the pattern of the code- aka "artificial intelligence" on a wider scale.
The compiler would have less to do, as it would be relying on a "smarter" processor now dealing with algorithmic patterns and fewer on/off switches in rapid succession. The same principle applies, but with an "intelligent" modifier.
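
Roughly, in toy form (again just my own sketch, with made-up names- not real machine code):

# The stream runs as ordinary binary, but positions marked 2 are remembered,
# and a later "feedback" pass can come back and flip them.

class RevisitableStream:
    def __init__(self, trits):
        # Treat 2 as "undecided": store 0 for now, but remember where it was.
        self.bits = [t if t in (0, 1) else 0 for t in trits]
        self.flagged = [i for i, t in enumerate(trits) if t == 2]

    def revisit(self, feedback):
        """Flip any flagged position the feedback function says to change."""
        for i in self.flagged:
            if feedback(i, self.bits):
                self.bits[i] ^= 1
        return self.bits

s = RevisitableStream([1, 2, 0, 1, 2])
print(s.bits)                                  # -> [1, 0, 0, 1, 0]
print(s.revisit(lambda i, bits: i % 2 == 0))   # flips flagged even positions -> [1, 0, 0, 1, 1]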

The groundwork up from the compiler would have to be reworked all the way as well, with the established GUI or command-line coding gaining much more imaginative and creative ability, and with the machine itself creating a lot of the "unknowns" or "variables".

That's a vague and wordy way to establish the foundation of the idea, but there's a lot more to it, obviously.

P.S.

Machines won't have the capacity to become intelligent until they can sense their environment and be able to interact with it. And whenever that happens, the first thing that machine will sense is itself, and then it will become self-aware. But this is Russ Manning talk.

I was going more along the lines that a thought produces every result or action around us; without the thought, the action cannot be produced. I.e., machine code replicating more the way a thought, or the human brain, works- i.e., memory and electrical impulses not only working forward, but working in reverse as well, to change the pattern.
 
I suggested the triangle because you can have a 2D triangle through that method, but you can't have a binary three-way switch. The problem is internal dynamics: if you lined 0, 1, and 2 up in a row, getting from 0 to 1 is "one space" and getting from 0 to 2 is "two spaces". Such a machine couldn't work- its internal values are not equal.

But if you had a triangle with the three points connected to wires that went to a processor, you are cheating the 2D world.

>>The basic binary would function just as it does now with the primary 0 and 1, and the 2 would be ignored until a logical algorithm is established. Depending on where the sequences lay, they would revisit themselves to alter the patterns... The compiler would have less to do, as it would be relying on a "smarter" processor now dealing with algorithmic patterns and fewer on/off switches in rapid succession...<<

This is the literal definition of a brain you're talking about, so you're definitely on the right page. But you can't get a third answer from a yes/no; the best you could ever hope for is a constant "maybe", and that is where you do your tunneling. The 2D triangle wired up to a 3D computer would do that. Of course we're talking theoretically, but these are the operating principles behind it, right? Getting more information out of the same thing?
 
Understood.

I was also playing with the idea, given where technology stands now, of perhaps having, instead of a "dual core" working with the binary along the lines of the basic "RAID" analogy, a "trio core" where the third processor acts in correspondence to hold the "idea" of the third floating integer, along with a supplied ROM chip for algorithmic identification and reverse clock cycles to revisit the forward cycles, if that makes sense in the way it's explained.

There are so many ways to attempt this idea now, with the advancements in processors. Much akin to how Honda rethought the idea of the turbo with VTEC, paving the way for the concept of infinite variability in valve timing. Everyone else was just thinking of how to "push" harder; then the foundation was revisited.
 
I agree; my thesis, or search, in that area was more along the lines of utilizing the ability to further basic calculations and building from there in terms of AI abilities with current consumer electronics.
Although, I do agree that, just as with other issues (e.g., stem cell research), there are ethics involved (e.g., enabling a machine to calculate more with less human input).
One intriguing yet disturbing example is the work and research being done at the University of Reading by Professor Kevin Warwick, as exemplified in his book "I, Cyborg".
A thorough look at his work shows he has already undergone an operation to surgically implant a silicon chip transponder in his forearm -

Project Cyborg 1.0
This experiment allowed a computer to monitor Kevin Warwick as he moved through halls and offices of the Department of Cybernetics at the University of Reading, using a unique identifying signal emitted by the implanted chip. He could operate doors, lights, heaters and other computers without lifting a finger.

and further;

Project Cyborg 2.0
On the 14th of March 2002, a one-hundred-electrode array was surgically implanted into the median nerve fibres of the left arm of Professor Kevin Warwick. The operation was carried out at the Radcliffe Infirmary, Oxford, by a medical team headed by the neurosurgeons Amjad Shad and Peter Teddy. The procedure, which took a little over two hours, involved inserting a guiding tube into a two-inch incision made above the wrist, inserting the microelectrode array into this tube and firing it into the median nerve fibres below the elbow joint.
A number of experiments have been carried out using the signals detected by the array; most notably, Professor Warwick was able to control an electric wheelchair and an intelligent artificial hand, developed by Dr Peter Kyberd, using this neural interface. In addition to being able to measure the nerve signals transmitted down Professor Warwick's left arm, the implant was also able to create artificial sensation by stimulating individual electrodes within the array. This was demonstrated with the aid of Kevin's wife Irena and a second, less complex implant connected to her nervous system.

If you read further;
Link:
Professor Kevin Warwick

His wife has agreed to undergo "experiments" wherein information "downloaded" by him may then be "uploaded" and experienced by her. (For a quick reference on this, just check the FAQ on his official site, linked above.)

The moral/ethical implications of this baffle me.

So, back to the original point: there are avenues available, I believe; however, the ones we pursue must be understood along with the implications of the advancements they lead to down the road.

I'd gladly like to hear your perspective on all of this.
 