Architecture is not only about buildings and interiors. Mastering design calls on many other skills.
Sketching is one of the most important. It helps you develop ideas and put them on paper, and it is the easiest and quickest way to explain your thoughts.
What do you think about this sketch?
Every skill develops with practice, and creativity is one of them. It's just like a muscle: if we stop imagining and experimenting, we start losing it over time.
"How might we..." are the three most powerful words in life. They let people start imagining positive possibilities.
The only way to know if it works is to try it.
Curious about the forces at play when moving objects on a table? Our latest experiment delves into the fascinating world of friction and motion, uncovering the force needed to move a wooden block on a horizontal surface. Let’s explore this together by performing the simulation ‘Force Needed to Move a Wooden Block on a Horizontal Table’. Access the virtual labs simulation for free on the DIKSHA portal and app by clicking on the link below: https://lnkd.in/g3j8EA6W
#DIKSHA #science #force #NCERT
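As a rough back-of-the-envelope companion to the simulation, here is a minimal Python sketch of the limiting static friction calculation; the block mass and friction coefficient are assumed illustrative values, not numbers from the DIKSHA lab.

```python
# Sketch of the physics behind the simulation: the horizontal force needed to
# just start sliding a block must exceed limiting static friction,
# F = mu_s * N = mu_s * m * g on a horizontal surface.
g = 9.8          # acceleration due to gravity, m/s^2
mass = 0.5       # mass of the wooden block, kg (assumed value)
mu_static = 0.4  # coefficient of static friction, wood on table (assumed value)

normal_force = mass * g                       # N = m * g on a horizontal table
limiting_friction = mu_static * normal_force  # force needed to start the block moving

print(f"Normal force: {normal_force:.2f} N")
print(f"Force needed to just start moving the block: {limiting_friction:.2f} N")
```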
It was an honour and pleasure to serve as an invited speaker at the 50th Symposium on Machine Diagnostics in Wisla, Poland. My presentation was "Non-uniform embedding in machine diagnostics based on vibration signal analysis". It was a pleasure to meet old friends and make new contacts.
Key takeaways from my presentation:
• Feature extraction algorithms can transform time signals into digital images interpretable by #deeplearning #neuralnetworks (a toy sketch follows below)
• Non-uniform embedding helps reconstruct and reveal the internal dynamical properties of the investigated systems
• 2D #permutationentropy plots and color #recurrenceplots inherit the best properties of non-uniform embedding
#MachineDiagnostics
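As a toy illustration of turning a vibration time signal into a 2D image, here is a minimal Python sketch of a thresholded recurrence plot built from a plain uniform time-delay embedding; the delay, embedding dimension, and threshold are illustrative choices, not the non-uniform embedding parameters from the talk.

```python
# Minimal sketch: 1D vibration-like signal -> time-delay embedding ->
# binary recurrence plot (a square image a CNN could take as input).
import numpy as np

def recurrence_plot(signal, dim=3, delay=5, threshold=0.2):
    # Uniform time-delay embedding: each row is one reconstructed state vector.
    n = len(signal) - (dim - 1) * delay
    states = np.column_stack([signal[i * delay : i * delay + n] for i in range(dim)])
    # Pairwise Euclidean distances between state vectors.
    dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    # Recurrence matrix: 1 where two states are closer than the threshold.
    return (dists < threshold * dists.max()).astype(np.uint8)

# Toy signal: two tones plus noise, standing in for a measured vibration.
t = np.linspace(0, 1, 500)
x = np.sin(2 * np.pi * 25 * t) + 0.5 * np.sin(2 * np.pi * 80 * t) + 0.05 * np.random.randn(t.size)
rp = recurrence_plot(x)
print(rp.shape)  # (490, 490) image
```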
Hello!
My journal article has been published!
How can you watch the "magic" of the titanium process?
To observe it, I used a machine vision system. You can read more on page 134 (https://lnkd.in/ee9RFbW4).
Want a robot to navigate a cluttered room and fetch you something? Presenting SPIN at #CVPR2024, which seamlessly moves past obstacles using active vision & whole-body coordination.
With no mapping or planning, SPIN learns an end-to-end policy in simulation, jointly optimizing perception & action.
Just as toddlers or animals navigate new environments with ease almost like muscle memory, SPIN adapts & reacts on the move. With only ego-vision, it shows reactive and agile local mobility and whole-body coordination. In partially observable environments, SPIN uses an actuated camera for selective perception, optimizing joint action-perception for hand-eye coordination.
With large-scale simulation training using only procedurally generated assets, simple rewards, and efficient randomization, SPIN learns to handle in-the-wild scenarios. It exhibits emergent behavior like whole-body coordination and dynamic obstacle avoidance.
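For readers curious how "jointly optimizing perception & action" can look in code, here is a minimal PyTorch-style sketch of a single policy that maps ego-vision and proprioception to both body and camera commands; the layer sizes and action dimensions are illustrative assumptions, not the architecture from the paper.

```python
# Sketch of an end-to-end perception+action policy: one network produces both
# body and camera actions, so where to look and how to move are learned together.
import torch
import torch.nn as nn

class ReactivePolicy(nn.Module):
    def __init__(self, proprio_dim=32, body_action_dim=19, camera_action_dim=2):
        super().__init__()
        # Small conv encoder for the ego-vision image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Joint head over image features + proprioception.
        self.head = nn.Sequential(
            nn.Linear(32 + proprio_dim, 128), nn.ReLU(),
            nn.Linear(128, body_action_dim + camera_action_dim),
        )
        self.camera_action_dim = camera_action_dim

    def forward(self, image, proprio):
        feat = self.encoder(image)
        out = self.head(torch.cat([feat, proprio], dim=-1))
        # Split the output into body commands and actuated-camera commands.
        return out[:, : -self.camera_action_dim], out[:, -self.camera_action_dim:]

policy = ReactivePolicy()
body_cmd, camera_cmd = policy(torch.randn(1, 3, 84, 84), torch.randn(1, 32))
print(body_cmd.shape, camera_cmd.shape)  # torch.Size([1, 19]) torch.Size([1, 2])
```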
In collaboration with Ananye Agarwal, Haoyu Xiong, Kenny Shaw, Deepak Pathak
Website: https://lnkd.in/evvUxSwj
Paper: https://lnkd.in/e-KauUGS
Reactive vs Planning is a big debate. As a proponent of bottom-up robotics, I am always curious as to how far we can push the reactive paradigm.
Introducing SPIN (being presented this week at #CVPR2024): end-to-end policy for perception, navigation & manipulation without any mapping or planning.
From Sequential Generation to Diffusion: A New Era in Motion Planning
Reading about the transition from traditional RNNs/LSTMs and Transformers to diffusion models in motion planning truly opened my eyes. Diffusion models, with their iterative denoising process, not only produce high-fidelity results but also overcome the sequential-dependency limitations of previous methods. Instead of predicting step by step, diffusion models generate the entire trajectory in a more flexible and efficient way. This shift represents a leap forward in handling complex, real-time tasks, like human motion planning. It’s fascinating to see how this method bridges data-driven imitation learning with real-time physical interaction.
https://lnkd.in/esXD_PD7
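To make the contrast with step-by-step prediction concrete, here is a minimal sketch of a DDPM-style reverse loop that denoises an entire trajectory at once; the tiny MLP denoiser, noise schedule, and dimensions are placeholder assumptions, not a trained motion-planning model.

```python
# Sketch: a diffusion model refines the *whole* trajectory by iterative denoising,
# rather than emitting one step at a time.
import torch
import torch.nn as nn

horizon, state_dim, steps = 16, 7, 50          # trajectory length, state size, diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, steps)      # linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

# Placeholder noise predictor: (noisy trajectory, timestep) -> predicted noise.
denoiser = nn.Sequential(nn.Linear(horizon * state_dim + 1, 256), nn.ReLU(),
                         nn.Linear(256, horizon * state_dim))

x = torch.randn(1, horizon * state_dim)        # start from pure noise
for t in reversed(range(steps)):               # reverse (denoising) process
    t_feat = torch.full((1, 1), float(t) / steps)
    eps_hat = denoiser(torch.cat([x, t_feat], dim=-1))
    # Standard DDPM update: remove predicted noise, add fresh noise except at t = 0.
    x = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps_hat) / torch.sqrt(alphas[t])
    if t > 0:
        x = x + torch.sqrt(betas[t]) * torch.randn_like(x)

trajectory = x.reshape(horizon, state_dim)     # the entire trajectory emerges at once
print(trajectory.shape)
```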
Google and Stanford presented:
ZeroNVS: Zero-Shot 360-Degree View Synthesis from a Single Image
Google and Stanford present ZeroNVS, a 3D-aware diffusion model for single-image novel view synthesis in diverse real-world scenes. Unlike prior methods focused on single objects, ZeroNVS handles complex backgrounds and multiple objects by training on diverse data sources. To address depth-scale ambiguity, the authors propose a novel camera conditioning scheme. They also introduce "SDS anchoring" to enhance background diversity during distillation, achieving state-of-the-art results in zero-shot settings and on the challenging MipNeRF 360 dataset.
Project page: https://lnkd.in/gpPSBneW
Paper page: https://lnkd.in/gddDQ4gV
In chapter 6 of my YouTube series "Machine Vision for Visually Impaired People" I talk about hand detection. The Explore and Read modes of the Echobatix app use both optical character recognition (OCR) and hand detection.
Did you know that a machine learning model intended to detect a hand in an image may detect a foot instead? Fun!
For folks who have little to no vision, I use analogies in this video to explain how the size of a hand varies in an image. It's an occasion to talk about goalball!
In the next video in the series I'll explain how skin tones affect the robustness of hand detection.
Have a good weekend!
https://lnkd.in/etT64A3R
Today we're announcing Gen-3 Alpha, a new standard for state-of-the-art video generation and a step forward in bringing General World Models to life.
Learn more at our blog and find some early examples of raw, unedited outputs the model is capable of today.
https://lnkd.in/eRjqp34c