Technically Feasible Huge Next Generation Space Station

Brian Wang | August 23, 2019

The goal of the Gateway Foundation's Von Braun Station is to build a dual-use station that is economically self-sustaining.

This space station would have 11 million cubic meters of pressurized volume, versus 931 cubic meters for the International Space Station, about 12,000 times the volume. It would be 488 meters in diameter, roughly 1.6 times the diameter of the world's largest stadium dome, the 300-meter dome of a sports stadium in Singapore.
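The size ratios quoted above can be checked with simple arithmetic from the stated figures:

```python
# Back-of-the-envelope check of the station-size figures quoted above.
ISS_VOLUME_M3 = 931              # pressurized volume of the ISS
STATION_VOLUME_M3 = 11_000_000   # proposed pressurized volume
STATION_DIAMETER_M = 488
DOME_DIAMETER_M = 300            # Singapore stadium dome

volume_ratio = STATION_VOLUME_M3 / ISS_VOLUME_M3
diameter_ratio = STATION_DIAMETER_M / DOME_DIAMETER_M

print(f"volume ratio:   {volume_ratio:,.0f}x")   # ~11,815x, i.e. roughly 12,000x
print(f"diameter ratio: {diameter_ratio:.2f}x")  # ~1.63x, i.e. roughly 1.6x
```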

It will have the volume of about 20 of the largest cruise ships.

The space station would spin like the station in 2001: A Space Odyssey, generating its own artificial gravity. It is designed to comfortably hold 1,500 staff and guests.
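For a rotating station, the apparent gravity at the rim comes from centripetal acceleration, a = ω²r. A minimal sketch of the required spin rate for the 488-meter diameter quoted above, assuming for illustration a target of full Earth gravity at the rim (the article does not state the design's actual gravity target):

```python
import math

# Spin rate needed for artificial gravity on a 488 m diameter rotating station.
# Centripetal acceleration at the rim: a = omega^2 * r.
RADIUS_M = 488 / 2   # rim radius, 244 m
G_TARGET = 9.81      # m/s^2; illustrative Earth-gravity target, an assumption

omega = math.sqrt(G_TARGET / RADIUS_M)   # angular velocity in rad/s
rpm = omega * 60 / (2 * math.pi)         # revolutions per minute

print(f"angular velocity: {omega:.3f} rad/s")  # ~0.201 rad/s
print(f"spin rate:        {rpm:.2f} rpm")      # ~1.91 rpm
```

A lower gravity target (e.g. lunar-level, for low-gravity physiology research) would scale the required angular velocity down by the square root of the ratio.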

They have created a technically feasible engineering design. Building Von Braun Station would also seed a space construction industry with bots, pods, drones, construction arms, new space suits, and large-scale truss-building machines designed for assembling large structures in space.

This video, and the ones that will follow, were made so that NASA and other aerospace engineers can see that this is a technically feasible design with a solid business plan, one that would allow NASA and other space agencies to buy, rent, or lease space on the station affordably.

Later, when those agencies have acquired the low-gravity data they need, they can move on to other projects with no binding financial ties.

We ask you all the same two questions:

1. Is this a good design?

2. Is the time right to build a rotating station to acquire valuable low-gravity human physiology data?


EU lawmakers are eyeing risk-based rules for AI, per leaked white paper

The European Commission is considering a temporary ban on the use of facial recognition technology, according to a draft proposal for regulating artificial intelligence obtained by Euractiv.

Creating rules to ensure AI is ‘trustworthy and human’ has been an early flagship policy promise of the new Commission, led by president Ursula von der Leyen.

But the leaked proposal suggests the EU’s executive body is in fact leaning towards tweaks of existing rules and sector/app specific risk-assessments and requirements, rather than anything as firm as blanket sectoral requirements or bans.

The leaked Commission white paper floats the idea of a three-to-five-year period in which the use of facial recognition technology could be prohibited in public places — to give EU lawmakers time to devise ways to assess and manage risks around the use of the technology, such as to people’s privacy rights or the risk of discriminatory impacts from biased algorithms.

“This would safeguard the rights of individuals, in particular against any possible abuse of the technology,” the Commission writes, adding that: “It would be necessary to foresee some exceptions, notably for activities in the context of research and development and for security purposes.”

However, the text raises immediate concerns about imposing even a time-limited ban — which is described as “a far-reaching measure that might hamper the development and uptake of this technology” — and the Commission goes on to state that its preference “at this stage” is to rely on existing EU data protection rules, aka the General Data Protection Regulation (GDPR).

The white paper contains a number of options the Commission is still considering for regulating the use of artificial intelligence more generally.

These range from voluntary labelling; to imposing sectorial requirements for the public sector (including on the use of facial recognition tech); to mandatory risk-based requirements for “high-risk” applications (such as within risky sectors like healthcare, transport, policing and the judiciary, as well as for applications which can “produce legal effects for the individual or the legal entity or pose risk of injury, death or significant material damage”); to targeted amendments to existing EU product safety and liability legislation.

The proposal also emphasizes the need for an oversight governance regime to ensure rules are followed — though the Commission suggests leaving it open to Member States to choose whether to rely on existing governance bodies for this task or create new ones dedicated to regulating AI.

Per the draft white paper, the Commission says its preference for regulating AI is option 3 combined with options 4 and 5: that is, mandatory risk-based requirements on developers (of whatever sub-set of AI apps is deemed “high-risk”) that could result in some “mandatory criteria”, combined with relevant tweaks to existing product safety and liability legislation, and an overarching governance framework.

Hence it appears to be leaning towards a relatively light-touch approach, focused on “building on existing EU legislation” and creating app-specific rules for a sub-set of “high-risk” AI apps/uses — and which likely won’t stretch to even a temporary ban on facial recognition technology.

Much of the white paper is also taken up with discussion of strategies for “supporting the development and uptake of AI” and “facilitating access to data”.

“This risk-based approach would focus on areas where the public is at risk or an important legal interest is at stake,” the Commission writes. “This strictly targeted approach would not add any new additional administrative burden on applications that are deemed ‘low-risk’.”

EU commissioner Thierry Breton, who oversees the internal market portfolio, expressed resistance to creating rules for artificial intelligence last year — telling the EU parliament then that he “won’t be the voice of regulating AI”.

For “low-risk” AI apps, the white paper notes that provisions in the GDPR which give individuals the right to receive information about automated processing and profiling, and set a requirement to carry out a data protection impact assessment, would apply.

That said, the regulation only defines limited rights and restrictions over automated processing in instances where there is a legal or similarly significant effect on the people involved, so it is not clear how extensively it would in fact apply to “low-risk” apps.

If it’s the Commission’s intention to also rely on GDPR to regulate higher-risk uses — such as, for example, police forces’ use of facial recognition tech — instead of creating a more explicit sectoral framework to restrict their use of highly privacy-hostile AI technologies, it could exacerbate an already confusing legislative picture where law enforcement is concerned, according to Dr Michael Veale, a lecturer in digital rights and regulation at UCL.

“The situation is extremely unclear in the area of law enforcement, and particularly the use of public private partnerships in law enforcement. I would argue the GDPR in practice forbids facial recognition by private companies in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate. However, the merchants of doubt at facial recognition firms wish to sow heavy uncertainty into that area of law to legitimise their businesses,” he told TechCrunch.

“As a result, extra clarity would be extremely welcome,” Veale added. “The issue isn’t restricted to facial recognition however: Any type of biometric monitoring, such as voice or gait recognition, should be covered by any ban, because in practice they have the same effect on individuals.”

An advisory body set up to advise the Commission on AI policy set out a number of recommendations in a report last year — including suggesting a ban on the use of AI for mass surveillance and social credit scoring systems of citizens.

But its recommendations were criticized by privacy and rights experts for falling short by failing to grasp wider societal power imbalances and structural inequality issues which AI risks exacerbating — including by supercharging existing rights-eroding business models.

In a paper last year Veale dubbed the advisory body’s work a “missed opportunity” — writing that the group “largely ignore infrastructure and power, which should be one of, if not the most, central concern around the regulation and governance of data, optimisation and ‘artificial intelligence’ in Europe going forwards”.


Where FaZe Clan sees the future of gaming and entertainment

Lee Trink has spent nearly his entire career in the entertainment business. The former president of Capitol Records is now the head of FaZe Clan, an esports juggernaut that is one of the most recognizable names in the wildly popular phenomenon of competitive gaming.

Trink sees FaZe Clan as the voice of a new generation of consumers who are finding their voice and their identity through gaming — and it’s a voice that’s increasingly speaking volumes in the entertainment industry through a clutch of competitive esports teams, a clothing and lifestyle brand and a network of creators who feed the appetites of millions of young gamers.

As the company struggles with a lawsuit brought by one of its most famous players, Trink is looking to the future — and setting his sights on new markets and new games as he consolidates FaZe Clan’s role as the voice of a new generation.

“The teams and social media output that we create is all marketing,” he says. “It’s not that we have an overall marketing strategy that we then populate with all of these opportunities. We’re not maximizing all of our brands.”
