Science Forums

Recommended Posts

Posted (edited)

I feel I should say this here as it is quite relevant to the topic. This is a non-issue that will not affect you in any negative way.

Here are some reasons why:

1. All of the other SAO games were PS4/V exclusive and saw very few sales outside of Japan.

2. Nobody has VR right now aside from the developers and early adopters, all of whom have incredibly expensive machines to power their VR.

3. There was never any dub done for the SAO games that I have heard of, and none of the "glorious PC master race" will play a game in VR where they have to read subtitles.

4. Any MMO needs a large player-base, and a VRMMO needs one all the same; plus the replayability of content in a VR world is very minimal to boot (assuming the project doesn't flop right out of the gate).

5. I was playing Counter-Strike the other day with a player whose name and picture were references to SAO; nobody understood the reference even when it was explained to them (they had never even heard of the show).

6. Correct me if I am wrong, but PC gaming is very limited in Japan outside of MOBAs and StarCraft, so who is their target audience anyway?

 

If the target market (the hardcore PC gamer) doesn't recognize the franchise or actually want the product, it will flop. The people behind it will have to realize at some point that their market is overseas, not in Japan. And they'll have to work for it, because MMOs are dead on arrival these days.

Edited by NotBrad
Posted (edited)

It seems as though the system created by the IBM team simply uses Oculus (or some version thereof) and has nailed virtual room movement using what I can only presume to be outside sources (it could be using an EMG amp as well). The fact that it is SAO-based is irrelevant to me; I was mostly worried about the advancements in the field, and after some more investigation into their stuff yesterday I discovered it was far less advanced than I thought.

 

@CraigD If you re-read what I've said previously you'll realise that BCI is only the tip of the iceberg. The entire point of "development" is to create a system that can do something that previously wasn't possible. If any form of diving (conscious, unconscious, or otherwise) were possible simply through BCI manipulation, the system would probably have existed since the mid-80s, after the term BCI was coined and honed. The current steps of the program are not to write to the brain, merely to read it. You have to crawl before you can walk, then you can run, then you can fly. Systems that read brain impulses in real time and convert them into control signals have existed for some time; it's merely a case of extending from there. The BCI engine I have is something I've had knocking around for some time, some freeware system. Whether it works or not is a different story, though I'm almost certain I'll have to alter it. It all depends on whether the source code is freeware as well; attempts to contact the original developers have so far failed. I did manage to find out about a few units that I can get my hands on, if I'm willing to part with nothing short of 5 - 6 grand. My statement on money being an issue still stands. That's why I've been forced to work on some more traditional 'games', purely and simply to be able to afford to develop things.
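As a very rough illustration of what I mean by reading impulses in real time and turning them into control signals, here's a minimal sketch. The sample source, the threshold and the command hook are all stand-ins I made up for the example, nothing from the actual engine:

```python
# Minimal sketch of a read-only BCI control loop. read_sample() stands in
# for whatever amplifier/headset driver would actually be used; no real
# device API is implied, and the threshold is something the setup
# sequence would have to tune per user.
import random
from collections import deque

WINDOW = 64          # samples per analysis window
THRESHOLD = 0.6      # per-user value found during calibration

def read_sample():
    """Stand-in for one sample from an EEG/EMG amplifier, normalised to 0..1."""
    return random.random()

def issue_command(name):
    """Hand the decoded intent over to the game/engine layer."""
    print("command:", name)

def run_control_loop(steps=200):
    window = deque(maxlen=WINDOW)
    for _ in range(steps):
        window.append(read_sample())
        # once the window is full, fire a command whenever mean power crosses the threshold
        if len(window) == WINDOW and sum(window) / WINDOW > THRESHOLD:
            issue_command("step_forward")

run_control_loop()
```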

 

Crowd funding has been suggested to me a lot of times, but I'm certainly not dumb enough to do that now. What I can show publicly is very little. It's one of the downfalls of being a small team with a lack of funding. A company like IBM would be able to whack together what I've spent 3 years on in about 6 months, and the divot in their bank account would be minimal. Hence why I got a bit stirred up with that recent announcement.

 

I certainly don't plan on going into excruciating detail about what I'm doing here. I've been screwed in the past by letting too much information go public about things, and I don't plan to let it happen again. What can be told, though, will be, as fast as possible.

 

And in response to the big bold letters, my company is not a 'tech startup' company. It's not a company formed purely for creating a dive system. It's a myriad of things. Within it is a games development studio working on more traditional gaming releases (which is what is taking up the majority of my time). It is also a distributor and sponsor for an undisclosed cosmetics company. I have my finger in more than one pie. The problem is that with the amount of work I need to do for it all, I only have 10 fingers. As much as I've pushed myself for the last 2 - 3 years to do it all without help, it's just not happening. Manpower is required. I'm not old, but I'm not getting any younger, and life is going on around me. Something has to happen, and soon. Some lives depend on it, as strange as it may seem. It's all very complicated, and I don't plan on sharing much of it. At least not yet.

Edited by JimSolo
Posted

Can I ask what these games are, as in, are they mobile games?

 

 That's why I've been forced to work on some more traditional 'games', purely and simply to be able to afford to develop things.

 

 

 

Posted

There are two 'app' mobile games made by another member of the team, a tycoon-style RTS, a standard story-driven FPS (currently on hold) and another first-person project that isn't a shooter. Details of the former and the latter will come closer to release, probably via different means than this thread; I hope to keep this mostly about DiveTech. It's not what I wanted to be doing at the moment, really, but it's fun and engaging, and will hopefully lead to greater things. And I get to help out a very special person in the process :)

Posted (edited)

Let me know when you finish your first prototype... I want to meet my waifu(s) in the VR world and wouldn't mind sharing the code with you... :Crunk:

In case you didn't catch on, the game will definitely be AO rated...

Edited by NotBrad
Posted

Whoah!

This stuff is the stuff I dreamed about as a kid. No, really. I'm not exactly an expert on neuroscience, but I would love to help. I haven't graduated yet, so I wouldn't need to be paid anything. I personally think you guys have a darn good shot at revolutionising VR as a whole...

It would be amazing if I could help you guys, but I know you're all wrapped up legally, so...

If there is anything you want help brainstorming ideas for or something like that, I would love to help.

AX8

P.S. this is one of the best forums I've seen in a while

  • 2 weeks later...
Posted (edited)

Things are slowly moving forward. Pretty much all I can work on currently is the code side of things, which is difficult when you have no reference equipment. That said, some of it is based on simple AI techniques, i.e. multiple processes for each potential system, with the program figuring out which needs to be used at which time. Each of those subsystems can be disabled if it isn't required by the user. That's mostly to do with frequency changes once the 'setup sequence' is done: taking a recording of which part of the brain elicits each response so that it can be matched each time the user makes the request.
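To make the 'multiple processes, pick the right one' idea a bit more concrete, here's a bare-bones sketch. The subsystem names and the confidence heuristics are placeholders I made up for illustration, not anything from the actual project:

```python
# Sketch of the "multiple processes per potential system" idea: each
# subsystem scores how confident it is that it should handle the current
# signal window, and the dispatcher picks the best enabled one.

class Subsystem:
    def __init__(self, name, enabled=True):
        self.name = name
        self.enabled = enabled   # user can switch a subsystem off entirely

    def confidence(self, window):
        """Return 0..1 confidence that this subsystem should act on the window."""
        raise NotImplementedError

    def handle(self, window):
        print(self.name, "handling window")

class MotorSubsystem(Subsystem):
    def confidence(self, window):
        return max(window)                   # placeholder heuristic

class SpeechSubsystem(Subsystem):
    def confidence(self, window):
        return sum(window) / len(window)     # placeholder heuristic

def dispatch(window, subsystems):
    # only enabled subsystems compete for the window
    candidates = [s for s in subsystems if s.enabled]
    best = max(candidates, key=lambda s: s.confidence(window), default=None)
    if best is not None:
        best.handle(window)

subsystems = [MotorSubsystem("motor"), SpeechSubsystem("speech", enabled=False)]
dispatch([0.1, 0.8, 0.4], subsystems)        # -> motor handling window
```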

 

For example, taking a step forward: the way my brain signals for a step to be taken may be different to the way someone else's does. So a databank has to be recorded of each potential way (be it neural links or intercepting a neural pathway directly), and then the system has to figure out which combination of commands best executes the action. It makes the system quite linear, but it's the best way to easily get things rolling this early in development, imho. I'm sure there are probably programs out there that do such things, but they wouldn't be cheap. I don't even have the funds to purchase hardware, much less software.
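A toy version of that databank idea: record labelled feature vectors during the setup sequence, then match live windows against them with a nearest-centroid comparison. All the numbers and labels here are made up for the example:

```python
# Per-user "databank" sketch: calibrate() stores one centroid per intended
# action, classify() maps a live feature vector to the closest action.
from statistics import mean

def centroid(vectors):
    return [mean(col) for col in zip(*vectors)]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def calibrate(recordings):
    """recordings: {'step_forward': [vec, vec, ...], 'turn_left': [...], ...}"""
    return {label: centroid(vecs) for label, vecs in recordings.items()}

def classify(model, live_vector):
    return min(model, key=lambda label: distance(model[label], live_vector))

model = calibrate({
    "step_forward": [[0.9, 0.1], [0.8, 0.2]],
    "turn_left":    [[0.2, 0.7], [0.1, 0.9]],
})
print(classify(model, [0.85, 0.15]))   # -> step_forward
```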

 

Neurologists will probably jump up and down at that previous statement, but the truth of the matter is, no matter how much we know about neurology, intercepting neural pathways, or even tapping off those signals, is a completely new class of research. It's not a poor theory to think that the brain may even realize it is being intercepted and attempt to route the command through a different part of the brain. In which case you'd be playing cat and mouse with a few terabytes of information. At least that's the conclusion I've come to given my current knowledge of neurology. Research is so time consuming, especially when you're working on 50% fact and 50% theory.

Edited by JimSolo
Posted

... but the truth of the matter is, no matter how much we know about neurology, intercepting neural pathways, or even tapping off those signals, is a completely new class of research.

I don’t believe this is true.

 

Edgar Adrian measured the discharge of a single neuron (the optic nerve of a toad, via a microelectrode) in 1928, for which, along with a lot of groundbreaking theoretical work, he and Charles Sherrington were awarded the 1932 Nobel Prize in Physiology or Medicine. Though it would be 30 years before the basic neurochemistry and anatomy underlying these signals was understood, neurologists have been “reading” individual neurons and ensembles using a growing variety of techniques for nearly a century. The history section of the Wikipedia article on single-unit recording has a pretty good timeline of major events in the history of reading individual neurons using microelectrodes.

 

The difficulty in reading and “writing to” nerves in the brain and elsewhere well enough for a device like the fictional NerveGear is not, I think, that such research is new, but that the number of neurons that would have to be read from and written to is huge (on the order of 10^11), while present-day techniques require hours to implant individual electrodes or arrays of a few thousand, and are somewhat traumatic, surgical acts. Only two classes of techniques, microelectrodes and optogenetics, are currently able to provide individual-neuron resolution.

 

It's not a poor theory to think that the brain may even realize it is being intercepted and attempt to route the command through a different part of the brain. In which case you'd be playing cat and mouse with a few terabytes of information.

I’ve not seen any evidence of this in any of the literature. Because the voltages microelectrodes measure from neurons are due to large numbers of Na and K ions, and because the currents in microelectrodes are very small, I can’t imagine any physical explanation suggesting that an individual neuron could detect when it is being measured by an electrode.

 

At least that's the conclusion I've come to given my current knowledge of neurology. Research is so time consuming, especially when you're working on 50% fact and 50% theory.

The main barrier I see to learning neurology is that, even when studied with nonhuman animals, it’s largely a medical discipline, so it is heavily regulated under the laws of most countries. You really must have an MD, the support of a respected public or private institution, or extraordinary secondary or undergraduate school support (e.g. really good academic advisors) to avoid legal trouble.

 

If you have enough money – on the order of US $100,000,000 – you could employ MDs and sponsor respected labs. But only a few thousand people in the world have this amount of money – I certainly don’t, and from your frequent complaints about lacking money, I gather you don’t either, Jim.

 

I find myself in a position where all I think I can do effectively to promote the development of better BCIs is to cheerlead and advocate, an unaccustomed and often discouraging situation, as I’m a programmer used to being able to work effectively and independently. In an inversion of conventional wisdom, for me, and I think for a lot of hobbyists, software’s easy, hardware’s hard.

Posted (edited)

I don’t believe this is true.

 

Edgar Adrian measured the discharge of a single neuron (the optic nerve of a toad, via a microelectrode) in 1928, for which, along with a lot of groundbreaking theoretical work, he and Charles Sherrington were awarded the 1932 Nobel Prize in Physiology or Medicine. Though it would be 30 years before the basic neurochemistry and anatomy underlying these signals was understood, neurologists have been “reading” individual neurons and ensembles using a growing variety of techniques for nearly a century. The history section of the Wikipedia article on single-unit recording has a pretty good timeline of major events in the history of reading individual neurons using microelectrodes.

 

The difficulty in reading and “writing to” nerves in the brain and elsewhere well enough for a device like the fictional NerveGear is not, I think, that such research is new, but that the number of neurons that would have to be read from and written to is huge (on the order of 10^11), while present-day techniques require hours to implant individual electrodes or arrays of a few thousand, and are somewhat traumatic, surgical acts. Only two classes of techniques, microelectrodes and optogenetics, are currently able to provide individual-neuron resolution.

 

Certainly didn't mean to suggest that neurology as a science is new, merely that using the read information in ways such as this is. In my research I've come across several study programs which are basically looking into the same thing, for example the Emotiv system. Clearly in those cases, reading information in even the most simplistic sense of a signal being 'on' or 'off' is possible. What I meant was that using it in such an advanced form, with so many frequencies which all vary based on the situation, is new. As with every set of research in life, you have to crawl, then walk, then run, then fly. I personally believe that, in the state of even the most basic brain-controlled computer systems, we are barely out of the womb.
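To give a rough idea of what I mean by 'so many frequencies', here's a throwaway sketch that splits a sampled window into the classic EEG-style bands and sums the power in each. The band edges, the sample rate and the synthetic signal are all just placeholders:

```python
# Rough band-power sketch: FFT a sampled window and sum spectral power
# inside each named frequency band. Purely illustrative values.
import numpy as np

FS = 256  # assumed sample rate in Hz
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(window, fs=FS):
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

t = np.arange(FS) / FS
# synthetic 10 Hz "alpha" component plus noise, standing in for a real recording
window = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(FS)
print(band_powers(window))
```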

 

 

I’ve not seen any evidence of this in any of the literature. Because the voltages microelectrodes measure from neurons are due to large numbers of Na and K ions, and because the currents in microelectrodes are very small, I can’t imagine any physical explanation suggesting that an individual neuron could detect when it is being measured by an electrode.

 

Purely theoretical. I know from a medical standpoint that the body has ways of 'MacGyvering' things. A relative of mine has collapsed arteries in her legs. The arterial walls degraded as a result of blood pressure irregularity caused by a thoracoabdominal aortic aneurysm. Over the course of about five years her body has 'sprouted', as it were, several hundred veins from the artery just above the blockage and has slowly grown them over time. Although they will never be as effective as an artery, those veins have re-engineered themselves to carry moderate pressure. As a result her feet now get enough blood to have delayed the need to amputate by about 15 years (at the very least). The first diagnosis was that she would have lost them a few months after the aneurysm repair. That was 6 years ago, and they show no signs of tissue degradation so far.

 

I know the two fields are completely different, but in theory, it's quite possible that the brain could do something similar, no? Not 'growing' new neural pathways, merely re-directing various information paths to different parts of the brain. It's all a giant interlinked system after all, even if certain areas deal with certain things. I do believe there have been some comatose patients who have been able to regain certain abilities (walking, talking, etc.) despite the fact that an entire half of their brain is clinically dead. The brain has figured out that if it re-routes each pulse of signal, and opens and closes various gates within each neural link to get the information where it needs to go, it can regain certain functions. Think of it as having water run through an entire network of pipes, but opening each floodgate to let the water through a different area. I do believe there is a flash game that does just that; can't remember the name of it, but I saw it a few weeks ago.
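A toy illustration of the floodgate analogy only (nothing to do with real neural tissue): knock one node out of a small pipe network and search for an alternate route.

```python
# Breadth-first search over a tiny "pipe network": when one node is
# blocked, the route is re-found through the remaining nodes.
from collections import deque

def find_route(graph, start, goal, blocked=frozenset()):
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen and nxt not in blocked:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

pipes = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(find_route(pipes, "A", "D"))                    # ['A', 'B', 'D']
print(find_route(pipes, "A", "D", blocked={"B"}))     # ['A', 'C', 'D']
```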

 

All completely theoretical and not *really* relevant to Diving in many forms, but still not something to be thrown out the window just because there hasn't been a medically recorded incident of it yet. That's the entire point of researching and developing a new system.

 

 

If you have enough money – on the order of US $100,000,000 – you could employ MDs and sponsor respected labs. But only a few thousand people in the world have this amount of money – I certainly don’t, and from your frequent complaints about lacking money, I gather you don’t either, Jim.

 

I find myself in a position where all I think I can do effectively to promote the development of better BCIs is to cheerlead and advocate, an unaccustomed and often discouraging situation, as I’m a programmer used to being able to work effectively and independently. In an inversion of conventional wisdom, for me, and I think for a lot of hobbyists, software’s easy, hardware’s hard.

 

That's the exact situation I'm in. It's why I've had to divert my time into other activities (developing other software to sell, and other personal IRL money-making alternatives). I'm not capable of working 9 - 5 like most people, from a physical and mental standpoint, nor would it be wise for me to do so even if I were. I would have no time and would potentially end up wasting what skills I have acquired over the years.

 

That's the reason progress is this glacially slow. It's all well and good me writing code from dusk till dawn, but with no point of reference I could well be wasting my time. On the flip side of the same coin, I can't sit around doing nothing. I already know when, where, how and what I need to buy, but without a multi-billion-dollar corporation under my wing, it hasn't happened yet. Legal issues are exactly the reason that even posting on this forum may well bite me in the backside, and also why the development team hasn't grown much. It's currently at 6 people, 4 of whom are casual. That said, the company as a whole is exploring other paths to bring things together in the not-too-distant future, so depending on how many limbs I manage to cross, things might pick up.

 

Thanks for the update; I don't mean to be annoying by asking. I was just curious.

 

All good, just try not to double post ;) I know it's exciting to a lot of people, but don't expect me to pipe up tomorrow and go "Yo it's done."

Edited by JimSolo
Posted

 

 

 

I know it's exciting to a lot of people, but don't expect me to pipe up tomorrow and go "Yo it's done."

 

Oh, I know, and sorry for double posting.

Posted (edited)

Also, for those of you who had concern and/or hope for the IBM/Watson VR experience of the same name as the anime discussed, it would seem that it was mistranslated and wasn't even very specific in Japanese to begin with.

http://www.roadtovr.com/sorry-internet-the-sword-art-online-vr-mmo-isnt-real/

At least according to this article, which referenced another article with a different source that corroborated the claim. So no SAO VRMMO any time soon.

Edited by NotBrad
Posted

As much concern as I had (which was purely selfish concern, might I add), that is disappointing. I did look into it, and it seems as though most of it is rehashed technology that has been around for a few years: just an Oculus that senses arm movement, much the way I think a VR version of Surgeon Simulator was done.

 

It is kinda disappointing that things aren't as advanced as they seemed, and that it appears to have been a marketing ploy. That said, there is a small selfish part of me that's going "I haven't wasted all this for nothing." I'm sure there are other people working on this who feel the same.

Guest
This topic is now closed to further replies.