We are ready
We can help make your business smarter.
Our Values
Firstly, we thrive in the company of dedicated individuals who bring their passion to work, serving as catalysts for AR24.7’s growth through their unwavering commitment.
Secondly, a culture of continuous learning and effective communication is ingrained in every member of AR24.7. Regardless of their expertise, they proactively gather diverse perspectives and engage in transparent and purposeful dialogue to uncover optimal solutions.
We invest only in the best technology.
We develop small to medium-sized robot vehicles that operate fully autonomously.
Our goal is to develop robots that help people and make everyday life easier and more convenient.
Find Your Team
Introduction
Motion planning and control are two crucial aspects of autonomous systems such as robots and self-driving vehicles. Motion planning involves determining optimal paths for robots or vehicles to navigate from their current location to a target destination, while considering obstacles and ensuring safety.
Responsibilities
- Motion control and path planning for the mobile robot platform.
Qualifications
- 2+ years of experience in related fields.
- Proficiency in C/C++ programming languages.
- Experience in development with Linux systems.
- Experience with ROS (Robot Operating System).
Preferred Qualifications
- Strong mathematics (geometry, optimization), physics (kinematics, dynamics), and analytical skills
- Experience in developing path planning & motion control (A*, sampling-based motion planners, optimization-based motion planners / LQR, MPC, NMPC, OBCA).
- English communication skills.
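The planners named above (A*, sampling- and optimization-based methods, MPC) span a wide range of techniques. As a small illustrative sketch of the simplest of them — not AR24.7's actual implementation, and with hypothetical function and data-structure names — here is a minimal A* search on a 4-connected occupancy grid:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid.

    grid: list of lists, 0 = free, 1 = obstacle.
    start, goal: (row, col) tuples.
    Returns the path as a list of cells, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan distance: admissible on a 4-connected grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start)]   # (f = g + h, g, cell)
    came_from = {}
    best_g = {start: 0}

    while open_set:
        f, g, cell = heapq.heappop(open_set)
        if cell == goal:                 # first pop of goal is optimal
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        if g > best_g.get(cell, float("inf")):
            continue                     # stale queue entry, skip
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    came_from[(nr, nc)] = cell
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None
```

In practice the role involves continuous-space planners and controllers (sampling-based planners, LQR/MPC), but the same cost-plus-heuristic structure underlies the discrete search variants.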
Introduction
The AI team at AR24.7 is responsible for developing and implementing artificial intelligence algorithms and models to enhance perception, decision-making, and scene understanding in self-driving systems. They work on tasks such as vision sensor data processing, object detection and tracking, and scenario comprehension using deep learning technologies. The primary objective of the AI team is to improve the robot’s ability to comprehend and react to its environment, ensuring safe and efficient autonomous driving.
Responsibilities
- Create semantic and instance segmentation models to enhance the robot’s understanding of its surroundings.
- Implement deep learning-based solutions for various robot tasks.
- Optimize lightweight deep learning models for deployment on mobile robot platforms.
- Develop multitask deep learning models for robot perception, including object detection and segmentation.
- Extract object information from various sensors such as cameras, LiDAR, depth cameras, and 3D point clouds.
- Develop object classification and tracking algorithms to detect objects in the scene.
Qualifications
- Understanding and experience with model optimization techniques.
- Proficiency in Python and C/C++ programming languages.
- Strong programming skills and experience.
- Experience with ROS (Robot Operating System) or other robot frameworks.
- Understanding of the characteristics of specific vision sensors such as cameras, radar, LiDAR, and depth cameras.
- Strong communication skills and a collaborative mindset
Preferred Qualifications
- Strong publications in top journals and conferences in the computer vision field.
- Experience working with various sensors.
- Deep understanding of camera geometry.
- Experience in camera-LiDAR sensor fusion.
- Previous involvement in autonomous driving-related projects or work.
- Expertise in lightweighting and optimization of deep learning models.
- Experience participating in or developing projects related to ADAS or autonomous driving.
- Experience in development with Linux/Embedded systems.
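As a small, self-contained illustration of the detection-and-tracking domain this role works in (not AR24.7 production code), the intersection-over-union (IoU) metric below is the standard way candidate detections are matched against ground-truth boxes or existing tracks:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Overlap rectangle: max of the top-left corners, min of the bottom-right corners.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)  # zero when boxes are disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

IoU thresholds of this kind drive both evaluation (e.g. mAP) and data association in tracking pipelines.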
Introduction
The SLAM team in mapping is responsible for developing and implementing Simultaneous Localization and Mapping algorithms, which enable autonomous systems to simultaneously construct a map of the environment and localize themselves within it. They work on tasks such as sensor fusion, feature extraction, loop closure detection, and map optimization, ensuring accurate and up-to-date maps for navigation and localization purposes in autonomous systems.
Responsibilities
- 3D LiDAR-based SLAM
- Visual SLAM
- Map merging & map streaming
- Robot localization
- Camera and LiDAR sensor fusion
- Map coordinate transformation
Qualifications
- Strong understanding and experience in linear algebra: matrices, vectors, numerical solvers, and related algorithms.
- Strong understanding and experience in SLAM.
- Proficiency in C/C++/Python programming languages.
- Experience with ROS (Robot Operating System) or other robot frameworks.
Preferred Qualifications
- Experience working with graph optimization libraries such as g2o and GTSAM.
- Experience in development with Linux/Embedded systems.
- Experience in working with various sensors.
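Map coordinate transformation, one of the responsibilities above, can be sketched with plain SE(2) homogeneous transforms. The helpers below are a minimal, hypothetical illustration (names and conventions are ours, not the team's codebase): a robot pose becomes a 3×3 matrix, and composing matrices chains frames together.

```python
import math

def se2_matrix(x, y, theta):
    """Homogeneous transform for a 2D pose (x, y, heading theta in radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, x],
            [s,  c, y],
            [0,  0, 1]]

def matmul3(A, B):
    """Compose two 3x3 transforms: apply B first, then A."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transform_point(T, p):
    """Map a point (px, py) through transform T (e.g. robot frame -> map frame)."""
    px, py = p
    return (T[0][0] * px + T[0][1] * py + T[0][2],
            T[1][0] * px + T[1][1] * py + T[1][2])
```

For example, a robot at map position (2, 1) with heading 90° that sees a landmark 1 m straight ahead (robot-frame point (1, 0)) places that landmark at map coordinates (2, 2). The same composition pattern underlies map merging: one map's origin expressed as an SE(2) pose in the other map's frame.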
Introduction
The AI team at AR24.7 is responsible for developing and implementing artificial intelligence algorithms and models that enhance perception, decision-making, and scene understanding in self-driving systems. They undertake tasks such as processing vision sensor data, detecting and tracking objects, and utilizing deep learning technologies for scenario comprehension. The primary goal of the AI team is to enhance the robot’s ability to understand and respond to its surroundings, ensuring safe and efficient autonomous driving.
Responsibilities
- Develop sensor fusion approaches for 3D multiple object detection and tracking using cameras and sensors.
- Perform multi-sensor calibration and fusion, incorporating cameras, ultrasound, and LiDARs.
- Implement sensor fusion algorithms on mobile platforms for effective object tracking.
Qualifications
- Proficiency in the development, theory, design, modeling, and implementation of sensor fusion algorithms.
- Deep understanding of kinematic motion models.
- Experience and familiarity with estimation filters such as KF, EKF, UKF.
- Proficiency in Python and C/C++ programming languages.
- Strong programming skills.
- Experience with ROS (Robot Operating System) or other robot frameworks.
- Understanding of the characteristics of specific vision sensors like cameras, radar, LiDAR, and depth cameras.
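The estimation filters listed above (KF, EKF, UKF) all share the same predict/update structure; as a hedged sketch under the simplest possible assumptions — a scalar state, which real multi-sensor fusion generalizes to vectors and matrices — the update below fuses a running estimate with a new measurement, weighting each by its uncertainty:

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update.

    Fuse the current estimate (mean x, variance P) with a measurement z
    of variance R. Lower-variance inputs pull the fused estimate harder.
    """
    K = P / (P + R)          # Kalman gain: how much to trust the measurement
    x_new = x + K * (z - x)  # corrected estimate
    P_new = (1 - K) * P      # fused variance is smaller than either input
    return x_new, P_new

def kalman_predict(x, P, u=0.0, Q=0.0):
    """Scalar prediction step: apply motion increment u, add process noise Q."""
    return x + u, P + Q
```

For instance, fusing a camera range of 10.2 m (variance 1.0) with a LiDAR range of 9.9 m (variance 0.04) yields an estimate near 9.91 m, close to the more precise LiDAR, with variance below either sensor's alone. The EKF and UKF in the qualifications extend this to nonlinear motion and measurement models.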
Preferred Qualifications
- Experience in camera-LiDAR sensor fusion.
- Familiarity with working with various sensors.
- Previous involvement in autonomous driving-related projects or work.
- Expertise in real-time multi-object tracking.
- Experience in participating in or developing projects related to ADAS or autonomous driving.
- Proficiency in development with Linux/Embedded systems.