> ICARUS AND
> COMIC AND
> ARTS ETC

i draw scifi comics with markers and paint and stuff. hi. ask me things.

stephenhawqueen:

the US is unreal like girls cant wear shorts to school, you can literally lose your job for being gay, and unarmed black children are brutally murdered on the regular but old white ppl r still like “what a beautiful country. i can freely carry a gun for no reason and some of our mountains look like presidents. god bless”

correction: our mountains which we stole from the native people and desecrated specifically as a huge ‘fuck you’ to the lakota sioux.

(via ohgoditsrabid)

2 hours ago
284,953 notes

wow FR randos

dont be greedy and get all entitled to more than one hatchling, freakin rude

edit: & then they post angry comments on my FR page because i told them i was saving the rest for other people who might want them? way to prove you dont feel entitled, rando. :|

23 hours ago
2 notes

hurraaid:

jimeng-xi:

have some cute pitbulls!

tHANK YOU!!!!!

(via smussol)

1 day ago
19,158 notes

thephooka:

not-fun asked over on my Patreon if I had ever released any material on the design of Numair’s sash in these past few White Noise pages, and the answer is: kind of? I think I mumbled about the nine-bars-and-laar motif in an old Tape Hiss post somewhere, but that’s it. Anyway!

The d’Escala family’s crest is a gold laar under nine bars on a purple field. The nine bars are actually stylized tubular bells, because Escala himself was a bell founder before he became ruler of Aetheri, though that’s a story for later. c: Long story short, pretty much everything in the Palace has got some stylized version of the Escalan crest on it; the d’Escala have been in power for about 7000 years now, so naturally it’s everywhere. Numair wears it somewhat ironically.

The other three crests here belong to the families descended from Escala’s siblings—clockwise from top right, the du Russi, the d’Ubis, and the du Cuppra. The du Russi and d’Ubis are still very much around; the du Russi (like Vlad) are considered the most powerful family next to the ruling one, and the d’Ubis (like our dear captain in these past few updates) now have really deep, conservative ties to the military. Which is ironic, because Ubis himself founded the university and pioneered healthcare as a right for all living creatures.

We won’t see any du Cuppra though. Cuppra married outside the species (a fact that will be covered later this chapter, probably) and had no descendants, though there was a sort of lineage of followers that adopted the crest and colors. Today Cuppra’s the patron of mixed-species folk, interspecies couples, and immigrants, although there is a sort of cult forming around their legacy…………..

1 day ago
14 notes

What people think recovery looks like vs. what it really looks like

(Source: dangergays, via haxardagron)

14 hours ago
5,675 notes
not-fun:

i have no idea what the NEW alien guide this month will be, but i know the re-write will be the hekshanians

THE REWRITE IS DONE and there’s a *lot* in the hekshanians-and-rulerism section since it’s Plot Stuff Related.

18 hours ago
22 notes
mindblowingscience:

Ethical trap: robot paralysed by choice of who to save

Can a robot learn right from wrong? Attempts to imbue robots, self-driving cars and military machines with a sense of ethics reveal just how hard this is

CAN we teach a robot to be good? Fascinated by the idea, roboticist Alan Winfield of Bristol Robotics Laboratory in the UK built an ethical trap for a robot – and was stunned by the machine’s response.

In an experiment, Winfield and his colleagues programmed a robot to prevent other automatons – acting as proxies for humans – from falling into a hole. This is a simplified version of Isaac Asimov’s fictional First Law of Robotics – a robot must not allow a human being to come to harm.

At first, the robot was successful in its task. As a human proxy moved towards the hole, the robot rushed in to push it out of the path of danger. But when the team added a second human proxy rolling toward the hole at the same time, the robot was forced to choose. Sometimes, it managed to save one human while letting the other perish; a few times it even managed to save both. But in 14 out of 33 trials, the robot wasted so much time fretting over its decision that both humans fell into the hole. The work was presented on 2 September at the Towards Autonomous Robotic Systems meeting in Birmingham, UK.

Winfield describes his robot as an “ethical zombie” that has no choice but to behave as it does. Though it may save others according to a programmed code of conduct, it doesn’t understand the reasoning behind its actions. Winfield admits he once thought it was not possible for a robot to make ethical choices for itself. Today, he says, “my answer is: I have no idea”.

As robots integrate further into our everyday lives, this question will need to be answered. A self-driving car, for example, may one day have to weigh the safety of its passengers against the risk of harming other motorists or pedestrians. It may be very difficult to program robots with rules for such encounters.

But robots designed for military combat may offer the beginning of a solution. Ronald Arkin, a computer scientist at Georgia Institute of Technology in Atlanta, has built a set of algorithms for military robots – dubbed an “ethical governor” – which is meant to help them make smart decisions on the battlefield. He has already tested it in simulated combat, showing that drones with such programming can choose not to shoot, or try to minimise casualties during a battle near an area protected from combat according to the rules of war, like a school or hospital.

Arkin says that designing military robots to act more ethically may be low-hanging fruit, as these rules are well known. “The laws of war have been thought about for thousands of years and are encoded in treaties.” Unlike human fighters, who can be swayed by emotion and break these rules, automatons would not.

"When we’re talking about ethics, all of this is largely about robots that are developed to function in pretty prescribed spaces," says Wendell Wallach, author ofMoral Machines: Teaching robots right from wrong. Still, he says, experiments like Winfield’s hold promise in laying the foundations on which more complex ethical behaviour can be built. “If we can get them to function well in environments when we don’t know exactly all the circumstances they’ll encounter, that’s going to open up vast new applications for their use.”

This article appeared in print under the headline “The robot’s dilemma”

Watch a video of these ‘ethical’ robots in action here

(via keetah-spacecat)
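
the dithering part is surprisingly easy to reproduce in like forty lines, actually. below is a tiny toy model of the setup as the article describes it. to be clear, every detail here (the 1-D arena, the numbers, the “slack” heuristic, both policies) is my own invention for illustration, not Winfield’s actual consequence engine. the idea: a rescuer that re-picks the most urgent savable proxy every tick ping-pongs between two symmetric proxies until neither can be reached in time, while one that commits to its first pick saves it comfortably.

```python
# toy model of the "ethical trap" above. everything here (the 1-D arena,
# the numbers, the "slack" heuristic, both policies) is my own invention
# for illustration -- this is NOT Winfield's actual consequence engine.

def simulate(policy):
    """A robot on a line tries to reach proxies before their countdowns
    hit zero. Saving one needs margin: the robot has to arrive with at
    least 2 ticks left to push the proxy clear of the hole."""
    robot, speed = 0.0, 1.0
    # proxy name -> [position, ticks until it falls into the hole]
    proxies = {"A": [-6.0, 12], "B": [6.0, 12]}
    saved, committed = set(), None

    for _ in range(40):
        # proxies whose countdown ran out have fallen in
        for name in [n for n, (_, t) in proxies.items() if t <= 0]:
            del proxies[name]

        def slack(n):  # spare time left if the robot chased n right now
            pos, t = proxies[n]
            return t - abs(robot - pos) / speed

        savable = [n for n in proxies if slack(n) >= 0]
        if not savable:
            break
        # "greedy" re-picks the most urgent savable proxy every tick;
        # "commit" sticks with its first choice until it is resolved
        if policy == "greedy" or committed not in proxies:
            committed = min(savable, key=slack)

        pos, t = proxies[committed]
        robot += min(speed, abs(pos - robot)) * (1 if pos > robot else -1)
        if robot == pos and t >= 2:  # arrived with time to push it clear
            saved.add(committed)
            del proxies[committed]
        for n in proxies:  # everyone's clock keeps running
            proxies[n][1] -= 1

    return saved

for policy in ("greedy", "commit"):
    result = simulate(policy)
    print(f"{policy}: saved {sorted(result) if result else 'nobody'}")
```

running it prints `greedy: saved nobody` and `commit: saved ['A']`, which is basically the article’s 14-out-of-33 failure mode in miniature: the greedy robot spends six ticks oscillating near the midpoint and then makes a dash it can no longer finish in time.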

1 day ago
564 notes

i have a bunch of hatchlings i need to get rid of on FR. mostly coatls and imps, but there’s a pair of fae and a tundra in there too.

lemme know if anyone wants one. price will prolly be like 3k, first come first serve.

1 day ago
2 notes