I’m Junping Li. I hold an MS in Marine Science from Shanghai Jiao Tong University and a BE in Automation (EECS) from Ocean University of China. I studied the training and application of large models for language, multimodal tasks, and agents at the AI Institute, Shanghai Jiao Tong University, and the cybernetics and learning of the cross-environment vehicle (CEV), also called the hybrid aerial underwater vehicle (HAUV), covering its factors, conditions, control strategies, and deep reinforcement learning, at Shanghai Jiao Tong University, advised by Prof. Zheng Zeng. During my university years, I was awarded Outstanding Student, Outstanding Graduate, the Academic Excellence Scholarship, and the Practice Scholarship.
My background covers cybernetics, systems and control theory, machine learning, large models, vehicle decision and planning, and robotics, and I enjoy learning about knowledge, causal inference, cognitive science, game theory, and related fields. Through cybernetics and AI, I have developed an interest in cognition, including mind, language, behavior, and society, and I think about a few questions: why human cognition, though neither Bayesian nor strictly scientific, is still natural and effective enough; rule learning and cognition, since in most cases we rely on rules and fuzzy logic, with explainable expression, abstraction of physiology, and compatibility with networks; and how cognition can be embedded into, and empower, other systems. I hope to combine cybernetics, AI, and psychology to propose new ideas, theories, and works that can be applied to the development of humans, society, and robotic systems, and that contribute to the world we live in.
Training and Application of Language and Multimodal Large Models and Agents
AI Institute, Shanghai Jiao Tong University, and Institute of Automation, Chinese Academy of Sciences
Trained the tokenizer and carried out pre-training and supervised fine-tuning (SFT) for a large language model; built a multimodal large model based on Qwen and SigLIP, from pre-training through SFT; ran model inference, established a knowledge base, and conducted retrieval-augmented generation (RAG); built agents on top of the models with function calling.
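The retrieval-augmented generation step can be summarized by the minimal sketch below. It assumes two hypothetical callables that are not part of the original project, `embed_fn` (text to vector) and `llm_generate` (prompt to answer), and uses plain cosine similarity over pre-embedded knowledge-base chunks; a production pipeline would add chunking, caching, and a vector index.

```python
import numpy as np

def top_k_chunks(query_vec, doc_vecs, k=3):
    """Rank knowledge-base chunks by cosine similarity to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    return np.argsort(d @ q)[::-1][:k]

def rag_answer(question, chunks, embed_fn, llm_generate, k=3):
    """Retrieve the k most relevant chunks and condition the model on them."""
    doc_vecs = np.stack([embed_fn(c) for c in chunks])  # done offline in practice
    idx = top_k_chunks(embed_fn(question), doc_vecs, k)
    context = "\n".join(chunks[i] for i in idx)
    prompt = f"Answer using the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
    return llm_generate(prompt)
```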
Nonlinear Control and Deep Reinforcement Learning of CEV/HAUV
Junping Li, H Zhou, D Lu, et al. Nonlinear and reinforcement learning control for motion of hybrid aerial underwater vehicle. Neural Computing and Applications, 2024.
Proposed a cross-environment model in 3-D space; identified the key issues of uncertainty, environment crossing, and the constraints imposed by environment differences; designed nonlinear control laws with robustness, adaptation, and fuzzy logic; applied deep reinforcement learning to the CEV using a deterministic policy, neural networks, and temporal-difference learning; compared the methods on tracking cases that exercise these issues.
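One standard way to combine a deterministic policy, neural networks, and temporal-difference learning is a DDPG-style actor-critic update; the sketch below follows that recipe for illustration and is not the paper's implementation. The network sizes and hyperparameters are arbitrary assumptions, and replay buffers, exploration noise, and target-network soft updates are omitted.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Deterministic policy: maps state to a bounded control action."""
    def __init__(self, s_dim, a_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(s_dim, 64), nn.ReLU(),
                                 nn.Linear(64, a_dim), nn.Tanh())

    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    """Action-value function Q(s, a), trained by temporal-difference learning."""
    def __init__(self, s_dim, a_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(s_dim + a_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def ddpg_step(actor, critic, target_actor, target_critic,
              actor_opt, critic_opt, batch, gamma=0.99):
    s, a, r, s2, done = batch
    # One-step TD target computed from the target networks.
    with torch.no_grad():
        y = r + gamma * (1.0 - done) * target_critic(s2, target_actor(s2))
    critic_loss = nn.functional.mse_loss(critic(s, a), y)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Deterministic policy gradient: ascend Q(s, actor(s)).
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```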
Cross-Domain Strategy, Factors, and Conditions
Junping Li, Y Jin, R Hu, et al. Trajectory tracking control of hybrid aerial underwater vehicle subject to wind and wave disturbances. Journal of Intelligent & Robotic Systems, 2024.
Proposed a strategy to address the control convergence problem caused by the large change at the environment transition; identified the key factors and conditions of crossing the environment in various scenarios with multiple variables; derived the critical relations and feasible domains of the factors that the control conditions need to satisfy.
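The idea of gating a controller switch on a feasible domain of crossing factors can be illustrated as below. The factor names, thresholds, and controller labels are all hypothetical placeholders, not values from the paper.

```python
def in_feasible_domain(crossing_speed, pitch_deg,
                       v_min=2.0, v_max=6.0, pitch_min=45.0, pitch_max=90.0):
    """Check whether candidate crossing factors lie inside a (hypothetical)
    feasible domain that the control conditions are required to satisfy."""
    return v_min <= crossing_speed <= v_max and pitch_min <= pitch_deg <= pitch_max

# Example: only switch to the aerial control law when the factors are feasible.
mode = "air_controller" if in_feasible_domain(3.5, 70.0) else "water_controller"
```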
Cross-Environment (CE) Phenomena and Mechanisms with Experiments and Learning
T Wei, Junping Li, Z Zeng, et al. Trans-media resistance investigation of hybrid aerial underwater vehicle based on hydrodynamic experiments and machine learning. Ocean Engineering, 2022.
Built the experiment platform (invention patent CN202110217870.4); conducted cross-environment experiments of the CEV under various states; obtained the key mechanisms and coefficients through multivariate analysis and neural networks.
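Fitting a coefficient to experimental measurements can be sketched with ordinary least squares under an assumed quadratic resistance model; everything below (the model form, density, reference area, and the synthetically generated data) is illustrative only, whereas the actual work used multivariate analysis and neural networks on the real experiment data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for an experiment log: crossing speed v (m/s) and
# measured resistance R (N), generated from the assumed model plus noise.
rho, A, Cd_true = 1000.0, 0.05, 0.18        # illustrative density, area, coefficient
v = np.linspace(0.5, 2.5, 20)
R = 0.5 * rho * Cd_true * A * v**2 + rng.normal(0.0, 0.2, v.size)

# Recover the drag coefficient by least squares on the 0.5*rho*A*v**2 regressor.
X = 0.5 * rho * A * (v**2)[:, None]
Cd_hat, *_ = np.linalg.lstsq(X, R, rcond=None)
print(f"estimated Cd ~= {Cd_hat[0]:.3f} (generated with Cd = {Cd_true})")
```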
Neural Computing and Applications, ICRA, IROS
Email: ljp.id [at] sjtu.edu.cn
Web: junpingli.com
Address: Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240