AMD today announced hUMA (heterogeneous Uniform Memory Access), the memory technology from its HSA concept that will appear in upcoming HSA APUs. The first true hUMA APU will be the upcoming Steamroller-based Kaveri, which should arrive on the market toward the end of this year.
For anyone interested in the details, the link is below. In short, uniform memory access means that the CPU and the GPU (graphics core) share the system memory resources. No AMD APU to date has worked this way, with the CPU and GPU sharing system memory, which is exactly why hUMA is such a turning point. All things considered, the end of this year is certainly going to be very interesting.
http://www.pcper.com/reviews/Processors/AMD-Details-hUMA-HSA-Action
"The idea behind hUMA is quite simple; the CPU and GPU share memory resources, they are able to use pointers to access data that has been processed by either one or the other, and the GPU can take page faults and not rely only on page locked memory. Memory in this case is bi-directionally coherent, so coherency issues with data in caches which are later written to main memory will not cause excessive waits for either the CPU or GPU to utilize data that has been changed in cache, but not yet written to main memory.
Current APUs work by partitioning off a chunk of main memory and holding onto it for dear life. Some memory can be dynamically allocated, depending on the architecture we are dealing with. Typically upon boot the integrated graphics will partition off a section of memory and keep it for its own. The CPU cannot address that memory, and in fact it appears gone for all intents and purposes. hUMA will change this. The entire memory space will be available to both the CPU and GPU, and they end up sharing this resource just as another CPU with full coherency would with the primary CPU. This not only applies to the physical memory, but also to the virtual memory space.
The greatest advantage of hUMA is likely that of the ease of programming as compared to current OpenCL and CUDA based solutions. Often functions have to be programmed twice, once for the GPU and once for the CPU, and then results have to be copied over from the individual memory pools so the other unit can read the results attained by the other. This is not only a lot of extra work, but the knowledge needed to adequately do this was typically reserved for elite level programmers with a keen understanding of the two different programming models."