"Simulator sickness", the variant of motion sickness induced by immersive virtual environments, can be some of the most intense motion sickness around. Though I can't compare it with space sickness (yet :-), I can say that I almost never get motion sick except when using an immersive display, especially a head-mounted one.
Last I read, the causes of motion sickness weren't well understood, but the prevailing theory was that a mismatch between visual and proprioceptive feedback is what induces nausea. Proprioceptive feedback is the knowledge (a sense, like touch or pressure, if you will) of where your body is in space and how its parts are positioned relative to each other. Proprioception is what lets you close your eyes and touch your left and right fingertips together.
How does this apply to games? The one-word answer is lag, the bane of every researcher working in virtual reality (in augmented reality it's even harder). User input occurs, and it takes a certain amount of time for that input to be processed by the computer, for the machine to determine what to do, and then to produce the appropriate output. The delay between the user's input (action) and the computer's response (reaction) is the critical lag factor in VR, games, and simulation. Cognitive experiments suggest that, so long as this lag is less than about a tenth of a second (the number varies with the task), the user feels in control. Greater than a tenth, and the user feels like their actions don't correspond to the reactions.
So, in VR, move your head. The screen needs to reflect this in under a tenth of a second. "No problem," you say, "because Quake runs at 60Hz on my machine." Yes, but frame rate isn't the critical factor; the critical factor is the length of time between your mouse click and the appearance of a missile on screen.
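To put a number on that, here's a minimal, self-contained sketch (C++) of what "measuring lag" actually means: timestamp the input when it's sampled, and check the clock again only after the frame that reflects it has been presented. The stage functions below are made-up stand-ins that just sleep for a fixed time - in a real engine you'd hook the actual input driver and the swap/present call.

    // Sketch: measure input-to-display lag against the ~0.1 s threshold.
    // The fake stages just sleep; real ones would be driver/engine/GPU work.
    #include <chrono>
    #include <cstdio>
    #include <thread>

    using Clock = std::chrono::steady_clock;

    static void fakeStage(int ms) {          // stand-in for real pipeline work
        std::this_thread::sleep_for(std::chrono::milliseconds(ms));
    }

    int main() {
        const double kLagBudget = 0.1;       // "feels in control" threshold, in seconds

        for (int frame = 0; frame < 5; ++frame) {
            Clock::time_point inputTime = Clock::now();  // pretend the mouse click lands here

            fakeStage(30);   // sampling, bus, interrupt handling
            fakeStage(30);   // game logic reacting to the click
            fakeStage(30);   // geometry processing and submission
            fakeStage(30);   // rasterization and scanout

            double lag = std::chrono::duration<double>(Clock::now() - inputTime).count();
            std::printf("frame %d: %.3f s from click to display%s\n",
                        frame, lag, lag > kLagBudget ? "  <-- over budget" : "");
        }
        return 0;
    }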
Let's run some numbers. Just to keep the math easy, let's say we'd like to maintain a 100Hz frame rate and a 0.1 second lag. So, at worst, no more than ten frames can go by between the time the user moves the mouse and the time the screen changes to reflect the move (e.g., a new door becomes visible at the edge of the screen). I'll tally these steps up in code right after the list.
1. Suppose the mouse is sampled at 100Hz (that's fast, but let's keep the math simple). Then worst case the user's move occurs just after a sample is taken. That's 0.01 second of lag right there (it takes that long before the mouse is sampled again).
2. Now how often can the IO controller put info on the bus? Probably no more than 100 times a second - after all, that's the sampling rate. So add 0.01 second of lag.
3. Now let's handle the IO interrupt. Add 0.01 sec of lag for the kernel-level context switch to deal with that. We're up to 0.03, with 0.07 left to spare, and we haven't yet hit the main CPU.
4. So the CPU finally gets the click. Let's pretend our game is really well-coded and so it only takes us 10 milliseconds to decode the click, look up the corresponding action (hope the event table didn't get swapped out :-), and get the geometry for that door to put on the screen (better hope you pre-loaded it and you don't have to get it from disk now - that would blow the lag budget right there). 0.04.
5. Now we run through all the other stuff going on in the game - monsters, AI, etc. Again, let's be generous - 10 msec. We're halfway to the "lag barrier" at 0.05 sec.
6. Now process all the geometry. Visibility, any transforms you can't do on the framebuffer card, etc. 10 msec, 0.06.
7. Another 10 msec dealing with the bus and shipping all that geometry to the graphics accelerator. 0.07! Phew! We made it!
8. Nope. Haven't drawn everything yet. Let's say the card does all the geometry transforms, clipping, etc. in our math-friendly 10 msec. 0.08.
9. Now rasterize all those polygons. 0.09. Uh oh.
10. Ship the frame to the monitor - 0.10 - we just made it!
11. Bzzt. Try again. Your monitor refreshes at 100Hz, and you just missed it. 0.11 and we're over.
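Here's the same tally as a throwaway program, just to make the arithmetic explicit. The stage names and the 0.01-second figures are the ones from the list above, not measurements - the only thing the code adds is the running total.

    // Back-of-the-envelope lag budget: 11 stages at 0.01 s each vs. a 0.10 s budget.
    #include <cstdio>

    int main() {
        struct Stage { const char* name; double seconds; };
        const Stage pipeline[] = {
            {"mouse sampling interval",               0.01},
            {"IO controller puts data on the bus",    0.01},
            {"interrupt / kernel context switch",     0.01},
            {"game decodes click, fetches geometry",  0.01},
            {"rest of the game logic (AI, monsters)", 0.01},
            {"geometry processing / visibility",      0.01},
            {"bus transfer to the graphics card",     0.01},
            {"transform and clip on the card",        0.01},
            {"rasterization",                         0.01},
            {"scanout to the monitor",                0.01},
            {"missed refresh, wait for the next one", 0.01},
        };
        const double budget = 0.10;              // the "lag barrier"

        double total = 0.0;
        for (const Stage& s : pipeline) {
            total += s.seconds;
            std::printf("%-42s running total %.2f s\n", s.name, total);
        }
        std::printf("end-to-end lag %.2f s vs. budget %.2f s: %s\n",
                    total, budget, total > budget ? "over" : "within");
        return 0;
    }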
We're not over the lag budget by much, but that's not the point. This was a contrived example. Now add all the layers of software and such in there, speed up some parts (the bus), and slow down others (gee, that fancy new AI isn't so cool anymore). The point is that the pipeline has many stages, all running in parallel on different frames, so end-to-end lag can be dramatically higher than the frame rate would suggest (more than ten frame times, in this case). Add network traffic and, ugh.
So how is this relevant to the question of OpenGL vs DirectX? Well, it sounds to me like the original poster gets well and truly immersed, setting up the possibility of simulator sickness if lag gets ugly. And it would seem that the lag is just under the poster's nausea threshold with OpenGL and just over it with DirectX - the user moves the mouse and the screen keeps up with OpenGL, but lags behind with DirectX. Time to spew your cookies.
This kind of stuff is a bear to manage - it's like real-time computing, only without the guarantees you get in a bona fide real-time environment. Usually, RT environments have a fixed set of tasks to deal with - you know how much time anything is going to take (it's still a bitch dealing with it, but at least you know the bounds). A game doesn't work that way. If the user's facing the wall (one or two big polygons) and suddenly looks back over her shoulder to see the glorious million-polygon Great Hall, well, you get the idea. Bounds shmounds. You've got no idea how long that will take to process.
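Just to put "bounds shmounds" in code: with a fixed budget of one 100Hz frame, the cost of drawing depends entirely on what the player happens to be looking at. The per-polygon cost below is invented purely for illustration - the point is only that the worst case isn't something you can bound in advance.

    // Illustration only: per-frame cost scales with whatever happens to be visible.
    #include <cstdio>

    double frameCostSeconds(long visiblePolygons) {
        return visiblePolygons * 0.5e-6;   // pretend each polygon costs 0.5 microseconds
    }

    int main() {
        const double budget = 0.01;        // one 100Hz frame

        std::printf("blank wall (2 polys):         %f s (budget %.2f s)\n",
                    frameCostSeconds(2), budget);
        std::printf("Great Hall (1,000,000 polys): %f s (budget %.2f s)\n",
                    frameCostSeconds(1000000), budget);
        return 0;
    }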