Here are some links to web based programs that I use.
1. compressor.io - This does a great job of compressing JPEGs, much better than Photoshop does.
2. pdfcompressor.com - Does what it says it does. Much better than the native compression for web that Adobe uses.
3. TinyPNG.com - Actually a plugin for Photoshop that compresses large PNG files. Again, better than Photoshop.
4. https://www.pingdom.com/ - Site goes down but your hosting company doesn't inform you? Set up a free account and have Pingdom email you.
1. Lyrebird: https://www.youtube.com/watch?v=VnFC-s2nOtI -
I know, I hate robot voices too, but this was the only video I could find on what is going on in Delft. Be aware also that when you hear or read that this will make things more "secure," that is a deflection from what is really happening. No one wants to find out that they are investing billions in satellite, 5G, or fiber infrastructure that may be obsolete as early as 2025. So, let's pretend it's about security and not speed.
I have been researching Starlink, SpaceX's new network, over the weekend in detail and I believe that Starlink is actually a possible threat to the cable industry and its broadband offering.
Because of broadband revenue, I believed the cable industry was safe as it moves its entertainment to OTT, but I am not so sure now.
Google is a Starlink investor. Google has also stopped building out its fiber network, which may explain the change in its position on the build-out.
Starlink just ran a test from its satellites to the ground over Los Angeles, asking people to log in if they could see the Wi-Fi network (Elon announced it via Twitter), a way to test capacity using the public. They will eventually use a combination of LEO and geostationary orbits. In the vacuum of space, light travels roughly 47% faster than it does through fiber, and Starlink plans a mesh network of satellites with phased array antennas. This leapfrogs existing fiber and local 5G to the home, making cable obsolete except for local networking. Elon just tweeted that the latency was 25 milliseconds, meaning one could play networked video games on the ground and not notice the lag (see link at bottom of page).
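The physics here is easy to sanity-check with a back-of-envelope calculation. In this sketch the fiber refractive index (about 1.47) and the New York to London distance are illustrative assumptions of mine, not Starlink figures:

```javascript
// One-way propagation delay over a given distance at a given speed.
const C_VACUUM_KM_S = 299792.458;          // speed of light in vacuum
const C_FIBER_KM_S = C_VACUUM_KM_S / 1.47; // in glass: ~32% slower (vacuum is ~47% faster)

function oneWayDelayMs(distanceKm, speedKmPerS) {
  return (distanceKm / speedKmPerS) * 1000;
}

const nyToLondonKm = 5570; // rough great-circle distance, for illustration
const viaVacuum = oneWayDelayMs(nyToLondonKm, C_VACUUM_KM_S); // ~18.6 ms
const viaFiber = oneWayDelayMs(nyToLondonKm, C_FIBER_KM_S);   // ~27.3 ms
// A LEO path adds up-and-down hops of a few hundred km each, but the
// vacuum speed still leaves room to beat long-haul fiber on latency.
```

The gap widens with distance, which is why a satellite mesh is most interesting for long-haul routes rather than the last mile.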
They have built-in upgrades to the LEO satellites every few years, and since they own the reusable rockets they have no markup on launch cost. As reusability of the rockets increases over time, the cost falls below the cable alternative's.
If I am right, we have until 2023 before the second phase rolls out directly to consumers. The first phase is something the cable industry may be interested in, since it is designed for industrial use. SpaceX expects to sign up more than 40 million subscribers to its broadband service by 2025, with $30 billion in revenue.
My main focus right now is trying to get information on the laser technology they are using for low-Earth-orbit-to-ground communication and how it may differ from what traditional companies are using.
Please read this: http://www.cobizmag.com/Trends/The-New-Internet-from-Space/
Elon plans to use this as his future Mars communication network with a chain of communication relays between Earth and Mars.
Starlink's competitor OneWeb cannot compete unless it gets subsidized launches. Boeing and Apple have the same issue. I assume they will sell their infrastructure to Starlink at some point because of their inability to compete on infrastructure costs. Of the $30 billion in income Starlink projects, I can safely assume some will come from existing cable companies.
Quantum Communication via Satellite
This article talks about how China is experimenting with quantum teleportation, not for encryption but for instantaneous communication between satellites and the ground.
I admit this is hard to follow when reading it (at least for me), but it shows that this is where the tech is headed at some point. Phased array communication is the most common right now, and no one is using quantum except the military.
I would scroll down and just look at the graphic called Quantum Leaps.
Starlink satellites are replaced completely every seven years and at any time can be upgraded with new technology.
“Pretty good. TinTin A & B are both closing the link to ground w phased array at high bandwidth, low latency (25 ms). Good enough to play fast response video games.”
Elon Musk – May 26, 2018
I looked into Amazon Sumerian, and it will likely fail. They're using the same idea I had for expanding the Aframe.io WebVR editor, but instead of trying to make money on the program or app, they are trying to force people to use their servers for storage, and charging a crapload of money for it, too, if you want to use it for anything more than a basic experience.
So, designers won't use it because they lack full control and can't host on their own servers. Anyone who knows how inexpensive A-Frame is won't use it because of the extra cost.
If they had sold it as a standalone app with the option of letting Amazon host the data easily, and kept their cost competitive, they might have had something, but instead they force the developer to use their servers.
It's sad how poor implementation can ruin a great idea.
Basically, the idea is that coding once gives you a 2D version, handheld WebXR/AR, 6DoF headsets, and 3DoF headsets, similar to adaptive design in web design.
Imagine you wanted your store's web page to work in 2D and also take advantage of the full range of AR and VR devices. WebXR will provide the foundation you need to create pages that work everywhere, and let you focus on compelling user experiences on each of the devices.
One aspect of progressive WebXR, a version of A-Painter adapted to handheld AR and immersive VR, was amazing to see. In this post, let's dive a bit deeper into the idea of progressive WebXR apps that are accessible across a much wider range of XR-supported devices.
The WebXR Device API expands on the WebVR API to include a broader range of mixed reality devices (i.e., AR/VR, immersive/handheld). By supporting all mixed reality devices in one API, the Immersive Web community hopes to make it easier for web apps to respond to the capabilities of a user’s chosen device, and present an appropriate UI for AR, VR, or traditional 2D displays.
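As a rough sketch of how a page might pick its UI from device capabilities (the helper name and UI labels are mine; only the session mode strings come from the WebXR draft API):

```javascript
// Choose the richest experience the current device supports.
// In a real page the flags would come from the WebXR Device API, e.g.
//   const ar = await navigator.xr.isSessionSupported('immersive-ar');
// Here they are passed in as a plain object so the logic stands alone.
function chooseExperience(supported) {
  if (supported['immersive-ar']) return 'AR overlay UI';
  if (supported['immersive-vr']) return 'immersive VR UI';
  if (supported['inline']) return 'inline 3D on the 2D page';
  return 'traditional 2D page';
}

// A phone with handheld AR but no headset:
chooseExperience({ 'immersive-ar': true, inline: true }); // 'AR overlay UI'
```

The point is the graceful fallback: one page, one code path, and every device gets the best version of the experience it can actually run.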
At Mozilla, this move aligns with experiments started last fall, when Mozilla created a draft WebXR API proposal, built a WebXR polyfill based on it, and published the WebXR Viewer, an experimental web browser application, to the iOS App Store. Publishing the app for iOS allowed them (and others) to experiment with WebXR on iOS, which is one of the target platforms for the XR Store demo that is the focus of this article. This demo shows how future sites can support the WebXR API across many different devices.
Before introducing the example store we've created, I'll give an overview of the spectrum of devices that a UX strategy might need to support when designing this kind of WebXR-compatible site.
See full article with link at top.
I shared this idea of using a low-intensity laser to beam images and video directly onto a person's eyes. The laser is harmless but scares people when they hear about it. My idea is that someday you might have four tracker/beamers in a room that track your eyes from four locations and beam images onto them. This is better for an AR experience because laser images can be opaque and appear like any image in the real world.
If the trackers can track the eye, they could also track hand movement so you can interact with the AR/VR world, possibly using acoustics to add the feeling of touch. No doubt lasers could do this as well. The downside is that we do not currently have the speed to track people's pupils at a distance.
So, in comes Vaunt. The laser is built into the glasses, so only a very small amount of adjustment is needed and you don't need tracking for the laser.
WebXR will be replacing WebVR as a web standard. This has been sanctioned by the W3C.
Virtual Reality and Augmented Reality are two distinct technologies that overlap each other. Both work with devices you wear on your face, and both use sensors that track your movement, location, and orientation.
The difference between the two is that VR creates completely new simulations of reality while AR layers content on top of the existing world. The overlapping technologies encounter the same challenges, so the W3C VR group decided to create one API that deals with both: WebXR.
XR, or Extended Reality, is a term that encapsulates both types of devices and allows developers to build APIs that you can leverage regardless of if you’re building an AR, VR or Mixed Reality experience.
Unlike WebVR, you currently need a developer browser to see it. The WebXR release date is likely late fall. WebXR allows for AR and VR across all platforms.
Does this affect A-Frame from aframe.io? Yes. Here are demos that already work directly in headsets and in the browser.
Mozilla has already released an iOS library (using ARKit) which allows experimenting with WebXR.
Mozilla has created an app using ARKit for iOS embodying the new standard.
NOTE: at present, the app is needed to do the test – no browser works with it (yet).
And here’s the download link for the WebXR Polyfill, which will work on the downloaded iOS app:
WebXR is going to be a standard, and within a year we can expect all the browser makers (except, ironically, Apple) to have WebXR working in their desktop and mobile versions.
Unlike Apple, Microsoft, like Mozilla, has been pushing open standards for VR and AR on the web (Apple would prefer you remain in closed native apps). A good example is the recent announcement of Simplygon, a cloud-based optimization service for 3D models.
Now, this is really important for WebVR and AR sustainability. 3D models are very large, and a future VR/AR web will need massive optimization, even more than we need for images and libraries today. Optimization is a complex process, and this service moves things a bit closer to ordinary designers and developers creating 3D worlds, then optimizing them for "streaming VR" delivery.
A sample library on GitHub: http://github.com/pindiespace/webvr-mini. The file size for all of the WebGL + WebVR code is under 300 KB minified.
Josh Carpenter at AWWWARDS speaking about WebXR
https://github.com/immersive-web Immersive Web Community Group
(formerly the WebVR Community Group)
People to know
Josh Carpenter, Google Daydream, ex-Mozilla Firefox. Started WebVR in headsets
Brandon Jones, Google (built the WebVR APIs four years ago)
Kevin Ngo @andgokevin https://www.supermedium.com/ formerly of Mozilla, Aframe.io
Diego Marcos @dmarcos https://www.supermedium.com/ formerly of Mozilla, Aframe.io
Tony Parisi, creator of VRML (1990s), co-creator of glTF (the JPEG of 3D models) https://tonyparisi.wordpress.com/
I will be guest speaker talking about the future of AR and VR and potential business opportunities.
Join me to learn about Virtual and Augmented Reality trends and where this creative tech is going! Hear about some of the new creative/business opportunities in the future plus see a VR demo!
Last night was the second Grav Meetup ever held. The Meetup was held on the third floor in the Mezzanine overlooking the Great Hall.
I have high hopes for this CMS. First of all, it has no database, which is a good thing given the proliferation of hacking now occurring. The Meetup is led by Andy Miller, who envisioned and developed the modern open source flat-file CMS platform Grav. With the help of two others he is beginning to give WordPress a run for its money. Andy also grew RocketTheme, one of only two main Joomla template companies, from a one-man operation into a thriving international business with 15 full-time employees and another 15+ part-time team members. And finally, his most noted accomplishment was co-founding the Joomla CMS, one of the three major CMS systems in use today, the other two being Drupal and WordPress.
We had Kevin from CU there shooting video to show to CU developers because of their great interest in replacing WordPress with Grav for Universities.
What we learned
The admin system is optional: the folder containing it can be removed or renamed with no harm to the content or the rest of the site.
We learned, in general, how to build a theme, what blueprints are, and how they are used to set up pages.
Andy requested help with documentation but no one volunteered. I believe at this point everyone at the Meetup is still trying to get up to speed on what it is and how it works.
The Grav system takes the data from all of the pages, combines it, and caches the rewritten result so it can serve data faster. On the front end the user sees none of this.
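The caching idea can be sketched in a few lines. This is a toy illustration in JavaScript (Grav itself is PHP and its real pipeline is far richer); the file names and function names here are mine:

```javascript
// Toy flat-file cache: merge all page files once, then serve the merged
// result from memory so later requests skip the per-file work.
const pages = new Map([
  ['home.md', '# Home'],
  ['about.md', '# About'],
]);

let cache = null;

function render() {
  if (cache === null) {
    cache = [...pages.values()].join('\n'); // combine and rewrite once
  }
  return cache; // every later request is a fast in-memory read
}
```

The first request pays the cost of reading and combining the flat files; everything after that is served from the cached result, which is why a flat-file CMS can feel as fast as a database-backed one.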
Almost my birthday. My birthday is April 8, 1957. The date has an odd ring to it for me. I'm not sure why; possibly it's the "A" and the "eight." 1957 is ancient history, and I am very aware of that.
My wife considers me an alien because I simply do not age, at least not mentally. I prefer texting to talking on the phone. I am always wishing the future would come faster. I saw the Internet and what it would become before almost everyone around me, and I expended a lot of energy trying to get people to see it. I was frustrated and angry when supposedly smart people would say stupid things like, "Why would anyone want to look at video on a computer, or a phone, at postage-stamp size?" in response to my explanation of the web of the future (back in 1993).
I was scoffed at by engineers who schooled me that video would never go through the web because the "pipe" was too small, that 28K modem speed was as fast as the internet would ever get.
At times people listened though. When Toy Story came out someone asked me if they should buy stock in Pixar. This was like someone asking me if fire would keep them warm and should they light one. I was always stunned at questions like these.
Lately it has been, "Should I buy stock in NVIDIA?" With VR, cryptocurrency, and AI coming like a tsunami, that also seems like a "fire" question.
I love working in A-Frame for WebVR. I think it has incredible potential for flat, normal web design as well. The fact that you can virtually punch a hole in flat space in a browser and create a vast three-dimensional space is amazing to me. Check this out: this is an older model of our building, made for the VR Exhibit, converted to the new glTF format and embedded in a WebVR/A-Frame web page. It takes about three seconds to load. The rotating blue cubes are buttons you can click with your mouse to teleport in space. You can also use the WASD keys, and click and drag with the mouse to look around. https://www.cablecenter.org/vr/gltf/
If you read my last article you know that I managed to build a winter scene, add png based characters, add hands and grabbing capability and teleportation. If you don't launch into VR you can use the WASD keys to move around.
I was excited to be able to add teleport. Tuesday night I gave a talk at the Denver VR Meetup about A-Frame and why I believe the Web will eat gaming in VR in 2018.
Once I showed teleport to the group in VR inside of a web page I could see that the group was interested. Before the demo people seemed to ignore me when I would bring this up.
So, I thought hands were great, but I wanted my controllers, not cartoon hands. I went back and studied the examples on A-Frame and found A-Painter. In A-Painter I saw that the program recognized my controller and added controls almost identical to Tilt Brush's. When I saw the tooltips explaining the controller functions, I was amazed. So, not only could I teleport, but I could create a modified menu based on the controller. I downloaded the GitHub version and tested it on my desktop. It worked like a charm.
I tore the code apart and found code in the build.js file that checked the controller type, and saw that it was set up for Vive as well. I found all of the tooltip and teleport components in the vendor folder.
On A-Frame's Slack channel I found code that acts as a point-and-click interface. This code also checks the controller type. I plan to write an extensive explanation of how the code identifies the controller type, as well as how to add a function to each controller button and trigger.
Here's a link to code on CodePen: https://codepen.io/luiting57/pen/dWLVwr
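As a sketch of what that kind of detection can look like (the component name and label mapping are mine; `AFRAME.registerComponent` and the `controllerconnected` event come from A-Frame's tracked-controls system):

```javascript
// Map a tracked-controls name to a human-readable label. Pure function,
// so it can be reused by menus, tooltips, or logging.
function controllerLabel(name) {
  if (/vive/i.test(name)) return 'Vive wands';
  if (/oculus/i.test(name)) return 'Oculus Touch';
  return 'generic tracked controller';
}

// In an A-Frame page, a small component can listen for the event that
// fires when a controller is detected and react to its type.
if (typeof AFRAME !== 'undefined') {
  AFRAME.registerComponent('which-controller', {
    init: function () {
      this.el.addEventListener('controllerconnected', (evt) => {
        console.log('Connected:', controllerLabel(evt.detail.name));
      });
    },
  });
}
```

You would attach it alongside the hand components, something like `<a-entity hand-controls="left" which-controller></a-entity>`, and swap the `console.log` for whatever per-controller menu or tooltip setup you need.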
Rumor has it that all of the controller and teleport functionality will be included in the core when version 0.6.0 comes out.
But the open web reaches a much greater number of people. And this can be viewed in 2D as well.
As a web designer and developer, as well as an ex-Flash animator, I decided to start a blog here just to keep people updated on my journey in programming.
Another test: https://www.cablecenter.org/vr/holiday2017/
A-Frame had similar issues with PNG files taking on the color of the background behind the image, but Kevin Ngo ( @andgokevin ) sent me some code that changed the transparency. I'm not sure how it works, but work it does. Kevin deserves a lot of credit. I have never seen anyone work as hard as he does. If I have a question, he usually responds within hours. I ask questions on their Slack channel as well as other places, and somehow he seems to be almost everywhere. Mozilla is lucky to have him.
<a-camera look-controls wasd-controls="fly: true"></a-camera>
How did I add fog?
<a-scene fog="color: #a6b7d0; near: 0; far: 65;">
Code snippet for hands and Vive/Oculus teleport?
<a-entity hand-controls="left" ui teleport-controls></a-entity>
<a-entity hand-controls="right" ui teleport-controls></a-entity>
In the header, I include the component like so:
I included the direct link because the version on GitHub from the documentation did not work for me (I think it's Vive-only). You can add controllers, but as of right now I think it hard-codes the controller type, whereas the way I'm doing it with hands allows for both. I also think a physical representation of, say, Oculus controllers when you're using Vive controllers is awkward.
Also, feel free to grab me and pick me up. I'm the cartoon with glasses.
My bet is on Ethereum and IPFS, but it looks like Wall Street big money is backing Blockstack. It could eventually shake out to one winner, but gone are the days of only one Internet platform. Soon, and I mean within a year, we will have multiple forms of Internet, all based on blockchain technology descended from Bitcoin.
Blockstack claims that Ethereum's code is too complex and fragile. I don't believe for a second that "booting up" Ethereum will take forever or that security is an issue. The blog post below argues the difference and also tries to prove that their solution is better.
I personally believe both versions are better than what we have at present. Still, as a professional I can see that neither is user-friendly. I tried to test Blockstack's browser but cannot get access. IPFS looks interesting, but as far as I know we have no browser that uses it. I think I heard about a Chrome extension, but I'm not clear on that either. How do I get on the IPFS network without setting up a server with command-line code? Does anyone know? Why does the Namecoin browser load the way it does? Where is my executable, like Firefox and Chrome have?
My wish is that these people could work together and create a common solution. But, I assume that in the near future we will have five versions of Internet all based on different versions of blockchain. The common web will disappear or we may see a main branch of Internet similar to a main cryptocurrency (like bitcoin) that people jump onto in order to access the other versions or flavors on the web.
Or the winner of the web platform war may be a browser that can read all versions of decentralized blockchain based servers and networks.
I remember when we only had three channels to watch on TV and only one internet to jump onto.
I've been holding my tongue for some time now when listening to so-called "experts" in storytelling as they stand on stages blathering about 360 videos being so different from 2D video and film.
The first time I heard someone say that you couldn't do the same things in 360 that you can in 2D, I was shocked and had to laugh. The first person I heard this from was a woman who was powerful in the film industry. I said nothing to her because I thought she would eventually figure out that she was wrong. But now the entire industry seems to be parroting her, either because it makes them think they sound intelligent or because they are simply not very bright and assume the "experts" are correct.
Yes, movement and cuts were a problem at low frame rates, but not anymore. If you have a machine fast enough, frame rate shouldn't be an issue.
To my point: if you were seeing 2D film for the first time, living one hundred years ago at the birth of this art form, cuts, fades, and the rest would all be confusing and, yes, might even have made you sick. Why doesn't this happen now? Because we've learned the language of cinema. If a cut happens and a new scene appears, it's because we are in a new location. We know this instinctively. Why do people think audiences ran out of the theater when a train came at them on screen? We had no language of cinema at the time.
In my opinion, anything you can do in 2D can be done in 360 video once the public, and apparently people in the business of film, have used VR enough. Until that realization sets in, I would help the public get used to 360 video by slowing down fade-outs, fade-ins, and cuts. Possibly use blur techniques before and after the cut. Provide a visual cue, such as a post, placed where it will be in the viewer's line of sight. In other words, create a solid visual cue that the eye focuses on as the cut happens and that blends to another visual cue after the cut.
Please don't recreate the past by pointing one camera at a live play; instead, help create a similar visual language for the new form of surround cinema called 360 video. I would like to think we can learn from history.