Robot Love
#1
I want to be a robot escort!
[Image: starship_london_2.0.jpg]
Quote:Robots will start delivering food to doorsteps in Silicon Valley and Washington, D.C. today


Quote:The idea is that one day soon these autonomous rovers will share sidewalk space with pedestrians on their own, but for now, they’ll be accompanied by handlers -- people walking alongside


http://www.recode.net/2017/1/18/14306674...con-valley
I'm nobody's pony.
Reply
#2
DM, do you remember this place I took you past on the way home from last year's Bruce Lee night Giants game? (less than a mile from our house)
Quote:Man arrested after scuffle with security robot
Mountain View-based Knightscope's robot has minor damage
by Kevin Forestieri / Mountain View Voice
A Mountain View resident was arrested last week after he allegedly knocked over a 300-pound security robot stationed outside of Knightscope's Mountain View office, according to police.

Police received reports of the man-on-robot assault around 8:15 p.m. on Wednesday, April 19. The suspect, later identified as 41-year-old Jason Sylvain, had allegedly knocked over one of the robots in front of the Knightscope building, and one of the company's employees detained Sylvain until officers arrived, according to police spokeswoman Katie Nelson.

[Image: 13145_main.jpg]
https://www.mv-voice.com/news/2017/04/26...rity-robot
I'm nobody's pony.
Reply
#3
Meant to post that here too but only got as far as my Robot thread on KFM. 

http://www.kungfumagazine.com/forum/show...ost1302568
Shadow boxing the apocalypse
Reply
#4
New report finds no evidence that having sex with robots is healthy
[Image: 32E6X4OW7E6IZFX5OU5JD7I5VQ.jpg]
https://www.washingtonpost.com/news/spea...c13e415533

Fuck this shit.  They obviously didn't do a proper and thorough study.
--cranefly
I'm nobody's pony.
Reply
#5
Quote:Meet the robot that can write poetry and create artworks
By Hannah Ryan, CNN

Updated 11:24 AM ET, Sat November 27, 2021
[Image: 211127084013-01-ai-da-robot-file-102321-exlarge-169.jpg]
Ai-Da went on display at the Great Pyramids of Giza in Cairo, Egypt, on October 23, 2021, as part of an exhibition presented by the organization Art D'Egypte in partnership with the Egyptian Ministry of Antiquities and Tourism.

(CNN)When people think of artificial intelligence, the images that often come to mind are of the sinister robots that populate the worlds of "The Terminator," "I, Robot," "Westworld," and "Blade Runner." For many years, fiction has told us that AI is often used for evil rather than for good.
But what we may not usually associate with AI is art and poetry -- yet that's exactly what Ai-Da, a highly realistic robot invented by Aidan Meller in Oxford, central England, spends her time creating. Ai-Da is the world's first ultra-realistic humanoid robot artist, and on Friday she gave a public performance of poetry that she wrote using her algorithms in celebration of the great Italian poet Dante.
The recital took place at the University of Oxford's renowned Ashmolean Museum as part of an exhibition marking the 700th anniversary of Dante's death. Ai-Da's poem was produced as a response to the poet's epic "Divine Comedy" -- which Ai-Da consumed in its entirety, allowing her to then use her algorithms to take inspiration from Dante's speech patterns, and by using her own data bank of words, create her own work.
Ai-Da's poem was described as "deeply emotive" by Meller and includes the following verse:

"We looked up from our verses like blindfolded captives,
Sent out to seek the light; but it never came
A needle and thread would be necessary
For the completion of the picture.
To view the poor creatures, who were in misery,
That of a hawk, eyes sewn shut."

Meller said that Ai-Da's ability to imitate human writing is "so great, if you read it you wouldn't know that it wasn't written by a human" and told CNN that when Ai-Da was reading her poem on Friday evening, "it was easy to forget that you're not dealing with a human being."
[Image: 211127084016-02-ai-da-robot-file-2019-exlarge-169.jpg]
Aidan Meller poses with Ai-Da during a launch event for its first solo exhibition in Oxford on June 5, 2019.
"The Ai-Da project was developed to address the debate over the ethics of further developing AI to imitate humans and human behavior," Meller told CNN. "It's finally dawning on us all that technology is having a major impact on all aspects of life and we're seeking to understand just how much this technology can do and what it can teach us about ourselves."
Meller said one key thing he and the team that works with Ai-Da have learned while developing her is that the project hasn't taught them how "human she is -- but it's shown us how robotic we are as humans."
As Ai-Da has learned how to imitate humans based on our behavior, Meller says the project has shown just how habitual human beings are and how we tend to repeat actions, words, and patterns of behavior -- suggesting that it is we, in fact, who are robotic.
"Through Ai-Da and through the use of AI, we can learn more about ourselves than ever before -- Ai-Da allows us to gain a new insight into our own patterns and our own habits, as we see her imitate them right in front of us," Meller told CNN.
Not only can Ai-Da read and write poetry -- she is also capable of creating artworks, and made one for the Dante exhibition titled "Eyes Wide Shut," which was crafted in response to an incident in Egypt in October, when Egyptian security forces detained Ai-Da and wanted to remove the cameras in her eyes due to concerns over surveillance and security.
"The incident showed just how much nervousness there is in the world around technology and its advancements," Meller said.
Meller is aware, too, of the concerns over the increasingly advanced development of artificial intelligence and the potential for using algorithms to manipulate populations but he said that "technology on its own is benign -- it's those that control it whose intentions could be morally and ethically questionable."
[Image: 211127084016-03-ai-da-robot-file-2019-exlarge-169.jpg]
Ai-Da is capable of creating artworks and poetry, which she does by using her algorithms to imitate human actions.

According to Meller, when it comes to worries about where the future of AI will take us, "the biggest fear we should have should be of ourselves and the human capability to use technology to oppress, not of the AI itself."
Meller thinks that Ai-Da can be a pioneer in the world of AI and that what she produces -- whether it's poetry, artworks or something else -- will push the boundaries of what can be achieved in technology and will allow us to learn more about ourselves than ever before, all through the eyes of a robot.

Shadow boxing the apocalypse
Reply
#6
This seems like a bad idea.


Quote:Xenobots, the World's First Living Robots, Are Now Capable of Reproducing: 'This Is Profound'
Xenobots have the capacity to reproduce in an "entirely new" way, scientists say — which could prove beneficial in making advancements toward regenerative medicine
By Natasha Dado, November 30, 2021 02:23 PM

[Image: image?url=https%3A%2F%2Fstatic.onecms.io...robots.jpg]

CREDIT: SAM KRIEGMAN AND DOUGLAS BLACKISTON

The world's first living robots, known as xenobots, have learned how to self-replicate, according to the scientists who developed them.
Xenobots — which are designed by computers and created by hand from stem cells of the African clawed frog Xenopus laevis, from which their name is derived — were introduced to the world in 2020. At the time, scientists announced the organisms were self-healing and could survive for weeks without food, according to CNN.
Now, experts have found that xenobots — which are blob-like in appearance — have the capacity to reproduce in an "entirely new" way, scientists at the University of Vermont, Tufts University, and the Wyss Institute for Biologically Inspired Engineering at Harvard University said Monday in a press release.
Scientists found that the xenobots are able to "gather hundreds" of single cells together and "assemble baby" organisms inside their mouths, which become new and functional xenobots within days, per the press release.

"With the right design—they will spontaneously self-replicate," said Josh Bongard, a computer science professor and robotics expert at the University of Vermont who helped lead the research.
"People have thought for quite a long time that we've worked out all the ways that life can reproduce or replicate. But this is something that's never been observed before," added co-author Douglas Blackiston, Ph.D., a senior scientist at Tufts University and the Wyss Institute.
"This is profound," said Michael Levin, a biology professor and director of the Allen Discovery Center at Tufts University. "These cells have the genome of a frog, but, freed from becoming tadpoles, they use their collective intelligence, a plasticity, to do something astounding."
Although the idea of robots that are able to reproduce on their own may sound frightening, one scientist involved with the research says this does not "keep me awake at night."
"We are working to understand this property: replication. The world and technologies are rapidly changing. It's important, for society as a whole, that we study and understand how this works," Bongard said in the press release, noting that having a better understanding of this kind of self-replicating biotechnology can have many practical uses — including for regenerative medicine.
"If we knew how to tell collections of cells to do what we wanted them to do, ultimately, that's regenerative medicine—that's the solution to traumatic injury, birth defects, cancer, and aging," Bongard added. "All of these different problems are here because we don't know how to predict and control what groups of cells are going to build. Xenobots are a new platform for teaching us."
Shadow boxing the apocalypse
Reply
#7
This is the beginning of the end
In the Tudor Period, Fencing Masters were classified in the Vagrancy Laws along with Actors, Gypsys, Vagabonds, Sturdy Rogues, and the owners of performing bears.
Reply
#8
Now they don't even need electricity!

https://boingboing.net/2021/12/09/tiny-l...icity.html

Quote:Tiny liquid robots swim around powered by "food" in the water instead of electricity
David Pescovitz




Researchers have built tiny "liquid robots" that can operate continuously without needing any electricity. Instead, they get their power from chemical processes fueled by "food" they collect from the liquid in which they're swimming.  According to the engineers from Berkeley Lab, the 2 millimeter "liquibots" could be used for chemical screening, to discover drugs, or to synthesize new pharmaceuticals by shuttling other chemicals around within a solution. From Berkeley Lab:
Quote:Through a series of experiments in Berkeley Lab's Materials Sciences Division, Russell and first author Ganhua Xie, a former postdoctoral researcher at Berkeley Lab who is now a professor at Hunan University in China, learned that "feeding" the liquibots salt makes the liquibots heavier or denser than the liquid solution surrounding them.

Additional experiments by co-investigators Paul Ashby and Brett Helms at Berkeley Lab's Molecular Foundry revealed how the liquibots transport chemicals back and forth.

Because they are denser than the solution, the liquibots – which look like little open sacks, and are just 2 millimeters in diameter – cluster in the middle of the solution where they fill up with select chemicals. This triggers a reaction that generates oxygen bubbles, which like little balloons lift the liquibot up to the surface.

Another reaction pulls the liquibots to the rim of a container, where they "land" and offload their cargo.


--tg
Reply
#9
Is this story the prologue or chapter 1 of "Fall of Man"?
As a matter of fact, my anger does keep me warm

Reply
#10
It would have been great if they cast Richard Grant as C-3PO



--tg
Reply
#11
https://www.dailystar.co.uk/news/weird-n...t-27173069


Quote:Brits can't take world's first sex robot seriously given doll's 'Glasgow accent'
Craig Williams

A bizarre video of a talking sex robot has done the rounds on social media as it appeared to have a Glaswegian accent - leaving many in hysterics.

The clip, showing Realbotix CEO Matt McMullen discussing how the Harmony 2.1 robot will work, was shared alongside the caption: "The first sex robots are about to hit the market."

Matt explains how an app can be connected to the sex robot, allowing people to have a "conversation with it".

The robot then "comes to life" and says (in an undeniably Scottish accent): "Glad you came back so fast baby, I'm glad you came back that fast.

The sex robot can be connected to an app to allow users to engage in conversations with it

"Wow baby, ten minutes without you seems like an eternity."

Since being posted to Twitter yesterday, the video has racked up over four million views, over 15,000 retweets and 17,000 likes, reports GlasgowLive.

Glaswegians have been quick to react to the video, with folk in their droves taking to Twitter to express their surprise at the bizarre accent the robot seems to be using.

One wrote: "Why'd they make the robot be from Glasgow."

Sex robots are on the market for around $6,149 (£4,884)

Another responded: "American sex dolls with a Glasgow uni accent who woulda thought it".

And a third tweeted: "Sounds like she goes to Glasgow Uni."

Others have pointed out that the sex robot sounds like the automated voice that greets passengers using trains heading for Glasgow Central station.

Another person tweeted: "Why’d they give her that accent. The next train at platform 1 will be - the - 12.15 - to - Glasgow Central."

Some said the robot sounded like the automated voice that greets passengers at Glasgow Central station

Joining them was another who shared the video with the caption: "The next stop is, glasgow central, where this train terminates."

And a fellow Glaswegian did likewise, tweeting: "The next stop is… Glasgow Central Low Level."

Some argued that the accent sounded more Dundee than Glasgow, but regardless everyone was baffled by Realbotix's choice of tone for the new sex robot.

--tg
Reply
#12
https://www.riffusion.com/about

(You have to visit the above link to get the embedded audio samples)


Quote:(noun): riff + diffusion


You've heard of Stable Diffusion, the open-source AI model that generates images from text?
photograph of an astronaut riding a horse

Well, we fine-tuned the model to generate images of spectrograms, like this:
funk bassline with a jazzy saxophone solo

The magic is that this spectrogram can then be converted to an audio clip:

Really? Yup.
This is the v1.5 stable diffusion model with no modifications, just fine-tuned on images of spectrograms paired with text. Audio processing happens downstream of the model.
It can generate infinite variations of a prompt by varying the seed. All the same web UIs and techniques like img2img, inpainting, negative prompts, and interpolation work out of the box.
Spectrograms
An audio spectrogram is a visual way to represent the frequency content of a sound clip. The x-axis represents time, and the y-axis represents frequency. The color of each pixel gives the amplitude of the audio at the frequency and time given by its row and column.
The spectrogram can be computed from audio using the Short-time Fourier transform (STFT), which approximates the audio as a combination of sine waves of varying amplitudes and phases.
The STFT is invertible, so the original audio can be reconstructed from a spectrogram. However, the spectrogram images from our model only contain the amplitude of the sine waves and not the phases, because the phases are chaotic and hard to learn. Instead, we use the Griffin-Lim algorithm to approximate the phase when reconstructing the audio clip.
The frequency bins in our spectrogram use the Mel scale, which is a perceptual scale of pitches judged by listeners to be equal in distance from one another.
Below is a hand-drawn image interpreted as a spectrogram and converted to audio. Play it back to get an intuitive sense of how they work. Note how you can hear the pitches of the two curves on the bottom half, and how the four vertical lines at the top make beats similar to a hi-hat sound.

We use Torchaudio, which has excellent modules for efficient audio processing on the GPU. Check out our audio processing code here.
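For a rough sense of that round trip, here is a minimal torchaudio sketch -- not the project's actual processing code, and the file names and parameter values are illustrative assumptions -- going from audio to a mel spectrogram and back via Griffin-Lim:

import torchaudio

# Minimal sketch: audio -> mel spectrogram (amplitude only) -> audio again.
# The phase is discarded, so Griffin-Lim estimates it on the way back.
waveform, sample_rate = torchaudio.load("clip.wav")

n_fft, n_mels = 2048, 512  # illustrative values, not the project's settings
to_mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=sample_rate, n_fft=n_fft, n_mels=n_mels, power=2.0)
mel = to_mel(waveform)  # this 2-D tensor is the kind of "image" the model trains on

# Invert: map mel bins back to a linear spectrogram, then estimate phase.
to_linear = torchaudio.transforms.InverseMelScale(
    n_stft=n_fft // 2 + 1, n_mels=n_mels, sample_rate=sample_rate)
griffin_lim = torchaudio.transforms.GriffinLim(n_fft=n_fft, power=2.0)
reconstructed = griffin_lim(to_linear(mel))

torchaudio.save("reconstructed.wav", reconstructed, sample_rate)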
Image-to-Image
With diffusion models, it is possible to condition their creations not only on a text prompt but also on other images. This is incredibly useful for modifying sounds while preserving the structure of an original clip you like. You can control how much to deviate from the original clip and towards a new prompt using the denoising strength parameter.
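As a hedged sketch of this kind of conditioning, the public diffusers img2img pipeline can be pointed at a spectrogram image; the checkpoint name, file names, and parameter values below are assumptions for illustration, not the project's own settings.

import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Condition generation on an existing spectrogram image. "strength" is the
# denoising strength: higher values stray further from the seed clip.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

seed_spectrogram = Image.open("funk_sax_spectrogram.png").convert("RGB")

result = pipe(
    prompt="piano funk",
    image=seed_spectrogram,
    strength=0.75,
    guidance_scale=7.0,
    generator=torch.Generator("cuda").manual_seed(42),
).images[0]
result.save("piano_funk_spectrogram.png")  # then convert back to audio as above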
For example, here is that funky sax riff again, followed by a modification to crank up the piano:
funk bassline with a jazzy saxophone solo


piano funk


The next example adapts a rock and roll solo to an acoustic folk fiddle:
rock and roll electric guitar solo


acoustic folk fiddle solo


Looping and Interpolation
Generating short clips is a blast, but we really wanted infinite AI-generated jams.
Let's say we put in a prompt and generate 100 clips with varying seeds. We can't concatenate the resulting clips because they differ in key, tempo, and downbeat.
Our strategy is to pick one initial image and generate variations of it by running image-to-image generation with different seeds and prompts. This preserves the key properties of the clips. To make them loop-able, we also create initial images that are an exact number of measures.
However, even with this approach it's still too abrupt to transition between clips. Multiple interpretations of the same prompt with the same overall structure can still vary greatly in their vibe and melodic motifs.
To address this, we smoothly interpolate between prompts and seeds in the latent space of the model. In diffusion models, the latent space is a feature vector that embeds the entire possible space of what the model can generate. Items which resemble each other are close in the latent space, and every numerical value of the latent space decodes to a viable output.
The key is that it's possible to sample the latent space between a prompt with two different seeds, or two different prompts with the same seed. Here is an example with the visual model:

We can do the same thing with our model, which often produces buttery smooth transitions, even between starkly different prompts. This is much more interesting than interpolating the raw audio, because in the latent space all in-between points still sound like plausible clips. The figure below is colorized to show the latent space interpolation between two seeds of the same prompt. Playing this sequence is much smoother than just playing the two endpoints. The interpolated clips are often diverse, with their own riffs and motifs that come and go.
Here is one of our favorites, a beautiful 20-step interpolation from typing to jazz:

And another one from church bells to electronic beats:

Interpolation of arabic gospel, this time with the same prompt between two seeds:

The huggingface diffusers library implements a wide range of pipelines including image-to-image and prompt interpolation, but we needed an implementation for interpolation combined with image-to-image conditioning. We implemented this pipeline, along with support for masking to limit generation to only parts of an image. Code here.
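As a rough illustration of latent-space interpolation (a sketch under assumed tensor shapes, not the project's actual pipeline), spherical interpolation between the initial noise tensors of two seeds looks something like this; each in-between latent would then be decoded into its own spectrogram clip:

import torch

def slerp(t, v0, v1, eps=1e-7):
    # Spherical interpolation between two latent tensors.
    a, b = v0.flatten(), v1.flatten()
    dot = torch.clamp(torch.dot(a / a.norm(), b / b.norm()), -1.0, 1.0)
    theta = torch.acos(dot)
    if theta.abs() < eps:  # nearly parallel: plain lerp is fine
        return (1.0 - t) * v0 + t * v1
    return (torch.sin((1.0 - t) * theta) * v0 +
            torch.sin(t * theta) * v1) / torch.sin(theta)

# Initial latents for two seeds of the same prompt
# (shape assumed for a 512x512 Stable Diffusion model: 4 x 64 x 64).
latent_a = torch.randn((1, 4, 64, 64), generator=torch.Generator().manual_seed(1))
latent_b = torch.randn((1, 4, 64, 64), generator=torch.Generator().manual_seed(2))

# Ten evenly spaced in-between latents for a smooth transition.
steps = [slerp(i / 9, latent_a, latent_b) for i in range(10)]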
Interactive Web App
To put it all together, we made an interactive web app to type in prompts and infinitely generate interpolated content in real time, while visualizing the spectrogram timeline in 3D.
As the user types in new prompts, the audio smoothly transitions to the new prompt. If there is no new prompt, the app will interpolate between different seeds of the same prompt. Spectrograms are visualized as 3D height maps along a timeline with a translucent playhead.
The app is built using Next.js, React, TypeScript, three.js, Tailwind, and Vercel.
The app communicates over an API to run the inference calls on a GPU server. We used Truss to package the model and test it locally before deploying it to Baseten which provided GPU-backed inference, auto-scaling, and observability. We used NVIDIA A10Gs in production.
If you have a GPU powerful enough to generate stable diffusion results in under five seconds, you can run the experience locally using our test flask server.
Code
Prompt Guide
Like other diffusion models, the quality of the results depends on the prompt and other settings. This section provides some tips for getting good results.
Seed image - The app does image-to-image conditioning, and the seed image used for conditioning locks in the BPM and overall vibe of the prompt. There can still be a large amount of diversity with a given seed image, but the effect is present. In the app settings, you can change the seed image to explore this effect.
Denoising - The higher the denoising, the more creative the results but the less they will resemble the seed image. The default denoising is 0.75, which does a good job of keeping on beat for most prompts. The settings allow raising the denoising, which is often fun but can quickly result in chaotic transitions and tempos.
Prompt - When providing prompts, get creative! Try your favorite artists, instruments like saxophone or violin, modifiers like arabic or jamaican, genres like jazz or rock, sounds like church bells or rain, or any combination. Many words that are not present in the training data still work because the text encoder can associate words with similar semantics. The closer a prompt is in spirit to the seed image and BPM, the better the results. For example, a prompt for a genre that is much faster BPM than the seed image will result in poor, generic audio.
Prompt Reweighting - We have support for providing weights for tokens in a prompt, to emphasize certain words more than others. An example syntax to boost a word is (vocals:1.2), which applies a 1.2x multiplier. The shorthand (vocals) is supported for a 1.1x boost or [vocals] for a 1.1x reduction.
Parameters can also be specified via URL, for example:
https://www.riffusion.com/?&prompt=rainy+day&denoising=0.85&seedImageId=og_beat

Examples
The app suggests some of our favorite prompts, and the share panel allows grabbing the spectrogram, audio, or a shareable URL. We're also posting some favorites at /r/riffusion.
Here are some longer-form interpolations we like:
Sunrise DJ Set to hard synth solo:

Detroit Rap to Jazz:

Cinematic New York City in a Dust Storm to Golden hour vibes:

Techno beat to Jamaican rap:

Fantasy ballad, female voice to teen boy pop star:

Citation
If you build on this work, please cite it as follows:
@software{Forsgren_Martiros_2022,
  author = {Forsgren, Seth* and Martiros, Hayk*},
  title = {{Riffusion - Stable diffusion for real-time music generation}},
  url = {https://riffusion.com/about},
  year = {2022}
}

Listen:

Quote:Logic is a wreath of pretty flowers that smells bad.


Try some prompts out yourself:

https://www.riffusion.com

--tg
Reply
#13
I used to get accosted (asked if I wanted help) by the Orchard Supply robot regularly.

For a while I was seeing those little delivery robots downtown, but I changed my commute route and haven't seen any in at least a year.
the hands that guide me are invisible
Reply
#14
(12-21-2022, 09:18 AM)King Bob Wrote: I used to get accosted by the Orchard Supply robot regularly.

This sounds like the opening line of a cf short story...
Shadow boxing the apocalypse
Reply
#15
Shadow boxing the apocalypse
Reply

