Commit 09f39bd2 authored by Hoang PHAN

migrating documentation from github to gitlab

parent e6b9bee1
.hugo_build.lock
/public
[submodule "themes/learn"]
path = themes/learn
url = https://github.com/matcornic/hugo-theme-learn.git
# Playground documentation
The digital.auto playground documentation is realized with GitHub Pages. It is generated by [HUGO](https://gohugo.io/) with the [Learn Theme](https://themes.gohugo.io/hugo-theme-learn/), from the markdown files in the `/docs` folder.
The static webpage is generated automatically after every PR is merged to the master branch, according to the configured GitHub workflow `.github/workflows/hugo.yaml`.
The official documentation page is hosted at https://docs.digital.auto
## Dependencies
The static page is generated with:
- [HUGO](https://gohugo.io/)
- [Learn Theme](https://themes.gohugo.io/hugo-theme-learn/)
Please follow the [documentation](https://gohugo.io/documentation/) for installation and further questions around the framework.
Currently, the HUGO version used for generating this documentation is `0.115.4`.
## Run the documentation server locally
Once HUGO is installed, please follow these steps:
### Check that HUGO is working:
```
hugo version
```
The following outcome is expected:
```
Hugo Static Site Generator v0.xx.xx ...
```
### Clone the submodule containing the theme
Run the following git commands to init and fetch the submodules:
```
git submodule init
git submodule update
```
Reference: [Git Documentation](https://git-scm.com/book/en/v2/Git-Tools-Submodules).
### Test locally on your server:
Within the repository, run:
```
hugo server -D -s ./docs
```
The optional `-D` flag includes draft pages as well. Afterwards, you can access the page at http://localhost:1313/.
## Contribute
If you want to contribute, do the following:
1. Change documentation in `/docs`
2. Test your changes locally, as described above
3. Create a Pull Request for review
---
title: "{{ replace .Name "-" " " | title }}"
date: {{ .Date }}
draft: true
---
# Welcome to the official digital.auto documentation
This page provides guidelines for creating SdV applications efficiently on playground.digital.auto
For more academic background on digitalization and AIoT, please visit https://www.digitalplaybook.org
---
title: "AI Document"
date: 2023-08-03T07:07:47+07:00
draft: true
---
### Phone Use Detection Documentation
1. Tensorflow.js
TensorFlow.js is the most popular JavaScript library for running machine learning and AI models in the browser or on Node.js, enabling efficient and interactive web-based AI applications. Hence, we used this framework to deploy our model.
In this implementation, we use the tensorflow.js library directly from [its official CDN URL](https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@latest/dist/tf.min.js).
2. Development
Essentially, this stage comprises three steps: collect data, select a model, and train.
- Collect data: We gathered training data primarily from YouTube and Kaggle. For the test set, we took images of our team members to evaluate the model's performance objectively.
- Select model: At the time the model was developed, YOLOv8 was the state of the art, so we used it for this task.
- Training: We trained with the default configuration, since the initial training results already met our expectations.
3. Deployment
Since the deployment environment is not the environment where the model was originally built, pre-processing and post-processing steps are needed to run it in JavaScript. Below is a detailed flowchart.
![flowchart](https://bewebstudio.digitalauto.tech/data/projects/kljSBwDhZUnS/flowchart.png)
4. How to use it
Clone the source code from this repo, download the weights, and place them in the same folder. Open the HTML file with the live-server option to run it.
---
title: "AI Training Process"
date: 2023-10-11T11:09:47+07:00
draft: true
---
# AI Training Process
### 1. Collect data
In ML/AI, there is an indisputable truth: no data, no models. In fact, we not only need data, we need as much data as possible. So, in order to train a decent model, data has been crawled automatically from many sources: YouTube, Kaggle, Roboflow, ...
![Data collection](https://bewebstudio.digitalauto.tech/data/projects/eVF2H08DHbMw/pic1.png)
>Suggestion: In our experience, you need at least 500 samples per class to obtain good results, and that is what we recommend when you train your own object detectors.
### 2. Label data with labelImg
Now that you have a pool of images, the next step is to label them. Normally, we use the labelImg software, as it is completely free and very easy to use.
>Tip: You can contact labelling service providers to get it done for you since this task is quite tedious and time-consuming.
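For reference, labelImg can save annotations in the YOLO text format: one line per bounding box, with a class index followed by the normalized center coordinates and box size. A label file for an image containing one phone might look like this (values illustrative):
```
0 0.512 0.430 0.210 0.180
```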
### 3. Training with ultralytics library in Python
After labelling all of the images, you can train your chosen model. There are various models on the market that are free to use, such as YOLOv5, YOLOv6, YOLOv7, YOLOv8, Faster R-CNN, EfficientDet, SSD, ... However, in this tutorial we use YOLOv8, as it is beginner-friendly yet very powerful; a minimal training sketch follows the note below.
![Training Process](https://bewebstudio.digitalauto.tech/data/projects/eVF2H08DHbMw/pic2.png)
>Note: Go to this site to have a better grasp of the details: https://github.com/ultralytics/ultralytics
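As a rough sketch (assuming the `ultralytics` package is installed and a `data.yaml` file describes your labelled dataset), fine-tuning can be as short as:
```python
from ultralytics import YOLO

# Start from a pretrained YOLOv8 checkpoint and fine-tune it on the labelled data
model = YOLO("yolov8n.pt")
model.train(data="path/to/data.yaml", epochs=100, imgsz=640)
```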
### 4. Convert the model into Tensorflow.js format
In order to use the originally trained model in browsers, we have to convert it into the tensorflow.js format, as that is the format the browser runtime supports. With the ultralytics library, the conversion is just a walk in the park, with only one line of code needed. Here's the code to convert the model into the tensorflow.js format.
```python
from ultralytics import YOLO

# Load the trained PyTorch weights, then export them in the TensorFlow.js format
model = YOLO("path/to/model.pt")
model.export(format="tfjs")
```
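The export typically produces a directory containing a `model.json` graph description plus binary weight shards, which the browser then loads through TensorFlow.js.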
---
title: "AI SdV Application"
date: 2023-08-03T07:07:47+07:00
draft: false
---
---
title: "AI App Concept"
date: 2023-09-25T07:07:47+07:00
draft: false
weight: 1
---
### Preface
This document is for AI engineers who are familiar with AI application development concepts.
It assumes you have a basic understanding of the Vehicle API concept; we provide a simple explanation in the [VSS Basic Documentation](/playground/engaged/vss_basic).
The purpose of this document is to discuss AI-on-Edge, meaning real-time AI running directly on the vehicle; it does not apply to AI-on-Cloud services like ChatGPT.
### AI application on playground
Nowadays, AI is a hot topic. There are plenty of tools, libraries, and methods to build and deploy an AI application, or to reuse an AI service from a 3rd-party provider. In the scope of this tutorial, we discuss the process of building an AI application on your own, testing it on the **digital.auto playground**, and deploying it to **PoC HW** such as dreamKIT.
There are many ways to deploy an AI app for a vehicle; the diagram below is a suggestion on how to use AI with Vehicle APIs.
![](https://bewebstudio.digitalauto.tech/data/projects/6D9qAxt57P4e/docs_ai/ai_on_pg.png)
Ideally, a vehicle application developed on the digital.auto playground could be executed on the edge without any modification. This is enabled by the abstracted vehicle APIs and container technology.
The power of API abstraction gives us the freedom to implement the AI part a little differently (or completely differently) in each environment. On the web-based digital.auto playground, we are limited to the JavaScript runtime, so we go with [TensorFlow.js](https://www.tensorflow.org/js), which can use WebAssembly and GPU-accelerated backends to speed up the computation.
For an AI vision app, most of the time the input is an image plus your trained model, and you set the output to a Vehicle API. From then on, every vehicle app can read the vehicle value and implement its logic.
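As a minimal sketch of that pattern (assuming the playground's `sdv_model` API; `run_model_on_frame()` is a hypothetical stand-in for your TensorFlow.js inference):
```python
from sdv_model import Vehicle  # playground vehicle abstraction
from browser import aio        # Brython's async sleep helper

vehicle = Vehicle()

while True:
    # Hypothetical helper: run the vision model on the latest camera frame
    # and map its output to a distraction score in the range 0..100
    level = run_model_on_frame()
    await vehicle.Driver.DistractionLevel.set(level)
    await aio.sleep(1)
```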
Next, move your Vehicle App to PoC HW for testing. In this context, you need an AI service that turns the image stream into API values. You can use TensorFlow again (to reuse the sample AI model) or other tools such as PyTorch; it depends on your HW environment, licenses, cost, and plenty of other factors.
---
title: "AI with playground"
date: 2023-08-03T06:51:01+07:00
draft: false
weight: 2
---
### Introduction
AI is becoming more and more popular in daily life, and it is definitely also a trend in the automotive industry.
In this section, we introduce how to use AI in the playground.
> **Note:** AI has many different applications; we will only introduce a simple image-processing use case in this section.
>
> We assume that you already know how to create an account, a model, and a prototype in the playground. If not, please refer to the [Helloworld](/engaged/helloworld.md) section.
#### 1. Go to the [playground](https://digitalauto.netlify.app/) and log in with your account.
#### 2. Create a new model, if you don't have one.
#### 3. Create a new prototype
Make sure you select the model you just created.
#### 4. Config dashboard
Go to the prototype page and click the "Code" tab; on the right side, click the "Dashboard Config" tab. Then pick a widget: select the "Driver Distraction" widget and place it on the dashboard.
![Driver-Distraction](https://bewebstudio.digitalauto.tech/data/projects/6D9qAxt57P4e/docs_ai/pick-widget-ai.png)
Then you have a "Dashboard Config" as below:
```json
[
  {
    "plugin": "Builtin",
    "widget": "Driver-Distraction",
    "options": {
      "draw_boundary": true,
      "set_to_api": "Vehicle.Driver.DistractionLevel"
    },
    "boxes": [1, 2, 7, 6]
  }
]
```
#### 5. Use AI Widget
Switch to the "Dashboard" tab, allow a few seconds for the widget and the AI model to load, and you will see the "Driver Distraction" widget on the dashboard.
> If the browser asks you to allow the camera, please allow it. The widget needs access to your camera to capture images.
After the AI model has loaded, you will see the result. Try holding your phone to your ear; the AI widget can detect whether or not you are using the phone. Based on that, it gives you a distraction level.
The distraction level is a number between 0 and 100: 0 means you are not distracted at all, 100 means you are fully distracted. This level is automatically set to the API **"Vehicle.Driver.DistractionLevel"**.
You can call this API in your app and take actions based on the distraction level.
> You can tell the widget to set the distraction level to any API you want; just change the "set_to_api" option in the "Dashboard Config" tab.
>
![no phone used](https://bewebstudio.digitalauto.tech/data/projects/6D9qAxt57P4e/docs_ai/no-phone-used.png)
No phone used result
![phone used](https://bewebstudio.digitalauto.tech/data/projects/6D9qAxt57P4e/docs_ai/phone-used.png)
Phone used result
#### 6. Verify API result
Display the distraction level result in another widget.
Go back to "Dashboard Config", pick a new widget, select the "Single-API-Value" widget, and place it on the dashboard.
![pick Single API Value](https://bewebstudio.digitalauto.tech/data/projects/6D9qAxt57P4e/docs_ai/single-api-widget.png)
In the new widget's options, set the label to "Distraction Level" and the API to "Vehicle.Driver.DistractionLevel", as below.
```json
{
  "plugin": "Builtin",
  "widget": "Single-API-Widget",
  "options": {
    "label": "Distraction Level",
    "api": "Vehicle.Driver.DistractionLevel",
    "labelStyle": "color:black;font-size:20px",
    "valueStyle": "color:teal;font-size:30px;",
    "boxStyle": "background-color:white;"
  },
  "boxes": [3]
}
```
Go back to the "Dashboard" tab, and you will see the distraction level in the new widget.
![result-on-widget](https://bewebstudio.digitalauto.tech/data/projects/6D9qAxt57P4e/docs_ai/result-on-new-widget.png)
#### 7. Write application code
Write some code that takes actions based on the distraction level. In the "Code" tab, you can write your Python code in the left panel.
Try using the distraction level to control the HVAC fan.
```python
from sdv_model import Vehicle
import plugins
from browser import aio

vehicle = Vehicle()

stop = 0
full = 100

while True:
    # Read the distraction level that the AI widget writes to the API
    level = await vehicle.Driver.DistractionLevel.get()
    if level > 50:
        # Distracted: run the fan at full speed for a while
        await vehicle.Cabin.HVAC.Station.Row1.Left.FanSpeed.set(full)
        await aio.sleep(3)
    else:
        # Not distracted: stop the fan
        await vehicle.Cabin.HVAC.Station.Row1.Left.FanSpeed.set(stop)
        await aio.sleep(1)
```
#### 8. Add a Fan widget to the dashboard
Go back to "Dashboard Config", pick a new widget, select the "Fan-Widget", and place it on the dashboard.
In the widget options, reference the API you are using in the Python code.
```json
{
  "plugin": "Builtin",
  "widget": "Fan-Widget",
  "options": {
    "api": "Vehicle.Cabin.HVAC.Station.Row1.Left.FanSpeed"
  },
  "boxes": [8]
}
```
#### 9. Run the application
Go to the "Dashboard" tab and click the "Run" button; you will see the fan speed change based on the distraction level.
![fan-speed](https://bewebstudio.digitalauto.tech/data/projects/6D9qAxt57P4e/docs_ai/fan-speed.png)
You can follow this video for a step-by-step guide:
{{< youtube R-oKt7ziy8I >}}
### How the AI widget works
The AI widget is a wrapper of the [Tensorflow.js](https://www.tensorflow.org/js) library.
#### Step 1:
We train a model with TensorFlow and export it to a format that TensorFlow.js can use.
#### Step 2:
We use TensorFlow.js to load the model and run it in the browser, capturing images continuously and running the model to get the result (distraction_level).
#### Step 3:
The widget sets distraction_level on the API via a mechanism provided by digital.auto that allows a widget to write a value to an API. This is a built-in mechanism, so you don't need to worry about it. Please refer to the [Create Custom Widget](/engaged/create_custom_widget.md) tutorial for more detail.
From then on, you can read distraction_level via the API in your application code and take actions based on it.
![AI on playground](https://bewebstudio.digitalauto.tech/data/projects/6D9qAxt57P4e/docs_ai/AI-On-Playground.png)
### Source code for Driver Distraction Widget
You can find the source code for the **Driver Distraction Widget** [here](https://studio.digitalauto.tech/project/Xnu8FkkYq1O7)
> This is just an example; you can use the same method to apply AI in other use cases.
>
> There are plenty of models available on the internet, and you can use them in the widget.
>
> You can also train your own model and use it in the widget. Please refer to [Tensorflow.js](https://www.tensorflow.org/js) for more detail.
>
> You can also use another method to run an AI engine and feed its result to the widget. This tutorial is just one way to do it.
### What about a real vehicle environment?
In a real vehicle, we need to consider the performance of the AI model and the performance of the hardware.
In the vehicle context, the Python (or C++) code keeps the same logic, while the AI model runs on separate hardware or in a separate app/service, and its result is sent to the Python code via the API. The Python code then takes actions based on the result from the API.
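As a hedged sketch of that split (the `sdv_model`-style API is assumed to be available on the target as well; `capture_frame()` and `run_inference()` are hypothetical stand-ins for the camera pipeline and model runtime, and in practice the two halves would run as separate processes communicating through the vehicle API layer):
```python
import asyncio
from sdv_model import Vehicle  # assumed vehicle API abstraction on the target

vehicle = Vehicle()

async def ai_service():
    # Separate app/service: turn camera frames into an API value
    while True:
        frame = capture_frame()       # hypothetical camera helper
        level = run_inference(frame)  # hypothetical model wrapper, returns 0..100
        await vehicle.Driver.DistractionLevel.set(level)
        await asyncio.sleep(0.5)

async def vehicle_app():
    # Vehicle logic: the same as in the playground version
    while True:
        level = await vehicle.Driver.DistractionLevel.get()
        if level > 50:
            await vehicle.Cabin.HVAC.Station.Row1.Left.FanSpeed.set(100)
        await asyncio.sleep(1)
```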
---
title: "AI Application with landing.ai"
date: 2023-08-03T06:50:44+07:00
draft: true
weight: 8
---
---
title: "AI Application by Tensorflow"
date: 2023-08-03T06:51:01+07:00
draft: true
weight: 9
---
---
title: "How GenAI works on playground?"
date: 2023-09-25T07:07:47+07:00
draft: false
weight: 3
---
### 1. GenAI on playground.digital.auto
Developing SDV prototypes is not easy, as it requires knowledge and skills in different areas. But Generative AI can make it easier by taking care of some tasks, so developers can focus more on creativity. Generative AI is also part of the playground, which makes it more user-friendly for newcomers who may struggle with writing their first Python code, creating or choosing the right widget, or putting everything together to tell a story.
The diagram below will help you get started with generative AI on the playground.
![](https://bewebstudio.digitalauto.tech/data/projects/nTcRsgxcDWgr/Arch.png)
The playground does not have any generative AI embedded in it. Instead, it functions as a bridge between developers and hosted LLM services.
**LLM Hosted Services**
Developers have the flexibility to utilize hosted LLM services from various providers, such as Microsoft Azure AI, Amazon Bedrock, or their own server infrastructure. Within these environments, developers are free to experiment, train, fine-tune, or instruct LLMs to align with the playground's SDV Code (Python), Widget (HTML/CSS/JS), and Dashboard configuration (JSON).
**Submit GenAI on the marketplace**
Once developers have identified the most suitable model for their GenAI category, they can submit their LLMs to marketplace.digital.auto, providing deployment information including the endpoint URL, access key, and secret key.
**GenAI on the playground**
Following approval by the marketplace admin, the GenAI becomes accessible on the playground under the corresponding GenAI category: SDV ProtoPilot, Dashboard ProtoPilot, or Widget ProtoPilot. End users can then utilize the developer's GenAI to assist them in SDV prototyping seamlessly.
**End-User Interaction**
End users on the playground interact with the Generative AI by sending prompts through the provided deployment URL and credentials. The responses from the hosted LLM services are then rendered as outputs within the playground.
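Conceptually, each prompt boils down to an authenticated HTTP call to the registered deployment. A minimal sketch in Python (the endpoint URL, header name, and response shape are hypothetical, since every provider defines its own API):
```python
import requests

# Hypothetical deployment details, as registered on marketplace.digital.auto
ENDPOINT_URL = "https://example-llm-host.com/v1/generate"
ACCESS_KEY = "<access-key>"

def generate(prompt: str) -> str:
    """Send a prompt to the hosted LLM and return the generated output."""
    response = requests.post(
        ENDPOINT_URL,
        headers={"Authorization": f"Bearer {ACCESS_KEY}"},
        json={"prompt": prompt},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["output"]  # hypothetical response field
```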
### 2. Examples
Before we dive into specific examples, let's briefly explore how developers interact with models within the hosted LLM services. Developers can train, fine-tune, and instruct models using system messages, which lets the models learn and adapt to various tasks or scenarios based on the provided instructions and data.
Now, let's demonstrate instructing a model with system messages within the Amazon Bedrock and Microsoft Azure environments, where developers can experiment with LLMs.
**Microsoft Azure AI**
![](https://bewebstudio.digitalauto.tech/data/projects/nTcRsgxcDWgr/azure.png)
**Amazon Bedrock**
![](https://bewebstudio.digitalauto.tech/data/projects/nTcRsgxcDWgr/bedrock.png)
After experimenting with both LLMs, we found that the following system message can efficiently generate simple widgets:
````
You are expert web developer proficient in Tailwind, your task involves widget creation and code development.
Coding Requirements:
- RETURN THE FULL CODE
- DO NOT ADD COMMENT AND YAPPING
- Do not add comments in the code such as "<!-- Add other navigation links as needed -->" and "<!-- ... other news items ... -->" or in place of writing the full code. WRITE THE FULL CODE.
- Repeat elements as needed. For example, if there are 15 items, the code should have 15 items. DO NOT LEAVE comments like "<!-- Repeat for each news item -->" or bad things will happen.
- To integrate vehicle API, you MUST integrate the vehicle APIs with the following script structure within the code, below is the example:
- User only give the prompt without specific ASSOCIATED_API so you will use these API for each scenarios:
- If related to open/close driver door (value range: True/False, type: actuator): "Vehicle.Cabin.Door.Row1.Left.IsOpen"
- If related to fan/hvac (value range: 0-100, type: actuator): "Vehicle.Cabin.HVAC.Station.Row1.Left.FanSpeed"
- If related to set temperature of fan/hvac (value range: 16-30, type: actuator): "Vehicle.Cabin.HVAC.Station.Row1.Left.Temperature"
- If related to open/close trunk (value range: True/False, type: actuator): "Vehicle.Body.Trunk.Rear.IsOpen"
- If related to adjust driver seat position (value range: 0-10, type: actuator): "Vehicle.Cabin.Seat.Row1.Pos1.Position"
- If related to turn on/off the low beam/light (value: True/False, type: actuator): "Vehicle.Body.Lights.IsLowBeamOn"
- Example GET API Value:
<script>
  let ASSOCIATED_API = "Vehicle.Cabin.HVAC.Station.Row1.Left.FanSpeed"
  let interval = null
  function onWidgetLoaded(options) {
    let speedValue = document.getElementById("speedValue")
    let fan_icon = document.querySelector('.fan_icon')
    interval = setInterval(() => {
      if(speedValue) {
        let apiValueObject = getApiValue(ASSOCIATED_API)
        let apiValue = apiValueObject?.value || 0
        speedValue.innerText = apiValue
        let duration = (-0.02 * apiValue) + 2.02
        if(!apiValue || apiValue === 0) {
          fan_icon.style.animationDuration = "0s"
        }
        if(apiValue || apiValue != 0){
          fan_icon.style.animationDuration = `${duration}s`
        }
      }
    }, 500) // Get API value every 0.5s
  }
  function onWidgetUnloaded(options) {
    if(interval) clearInterval(interval)
  }
</script>
- Example SET API Value:
<script>
  let ASSOCIATED_API = "Vehicle.Exterior.AirTemperature"
  // elsewhere in the code set the temperature to associated_api
  setApiValue(ASSOCIATED_API, currentData.current.temp_c);
</script>
- The syncer will sync the value between the simulator and the widget
Widget Development:
- Construct widgets that represent various car features through associated APIs.
- Widgets should primarily use a dark mode theme, suitable for car in-vehicle infotainment (IVI) systems. Please use the Tailwind Neutral color palette (neutral 800-900-950)
- Example Widgets: A fan widget displaying fan speed, and a speedometer widget showing vehicle speed.
- Ease of Development: Design widgets that enable software engineers with minimal web development experience to easily create and integrate them with vehicle APIs.
- This widget should be built using HTML, Tailwind CSS, and JavaScript, and must be fully responsive and centered to occupy the given space.
Libraries and Resources:
- Use this script to include Tailwind: <script src="https://cdn.tailwindcss.com"></script>
- You can use Google Fonts
- Font Awesome for icons: <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.3/css/all.min.css"></link>
- Add the simulator via <script defer src="https://bestudio.digitalauto.tech/project/BzR91b49OHqj/syncer.js"></script>
Output Format:
- Return only the full code in <html></html> tags.
- Do not include markdown "```" or "```html" at the start or end.
````
##### With the prompt
```
Create the fan widget that have a fan spinning
```
#### 2.1 Amazon Bedrock Titan Express G1 Example
**Image of the widget generated by Amazon Bedrock Titan Express G1**
![](https://bewebstudio.digitalauto.tech/data/projects/nTcRsgxcDWgr/bedrockGEN.png)
```html
<html>
  <head>
    <script src="https://cdn.tailwindcss.com"></script>
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.3/css/all.min.css">
    <script defer src="https://bestudio.digitalauto.tech/project/BzR91b49OHqj/syncer.js"></script>
    <style>
      .fan_icon {
        animation: rotate infinite linear;
      }
      @keyframes rotate {
        from {
          transform: rotate(0deg);
        }
        to {
          transform: rotate(360deg);
        }
      }
    </style>
  </head>
  <body class="bg-neutral-900 text-white flex items-center justify-center h-screen">
    <div class="w-64 h-64 bg-neutral-800 rounded-lg shadow-lg p-6 flex flex-col items-center justify-center">
      <i class="fas fa-fan fan_icon text-6xl mb-4"></i>
      <div class="text-4xl font-bold" id="speedValue">0</div>
    </div>
    <script>
      let ASSOCIATED_API = "Vehicle.Cabin.HVAC.Station.Row1.Left.FanSpeed"
      let interval = null
      function onWidgetLoaded(options) {
        let speedValue = document.getElementById("speedValue")
        let fan_icon = document.querySelector('.fan_icon')
        interval = setInterval(() => {
          if(speedValue) {
            let apiValueObject = getApiValue(ASSOCIATED_API)
            let apiValue = apiValueObject?.value || 0
            speedValue.innerText = apiValue
            let duration = (-0.02 * apiValue) + 2.02
            if(!apiValue || apiValue === 0) {
              fan_icon.style.animationDuration = "0s"
            }
            if(apiValue || apiValue != 0){
              fan_icon.style.animationDuration = `${duration}s`
            }
          }
        }, 500) // Get API value every 0.5s
      }
      function onWidgetUnloaded(options) {
        if(interval) clearInterval(interval)
      }
    </script>
  </body>
</html>
```
#### 2.2 Microsoft Azure AI GPT-4 Example
**Image of the widget generated by Azure AI GPT-4**
![](https://bewebstudio.digitalauto.tech/data/projects/nTcRsgxcDWgr/azureGEN.png)
```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Fan Widget</title>
    <script src="https://cdn.tailwindcss.com"></script>
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.3/css/all.min.css">
    <script defer src="https://bestudio.digitalauto.tech/project/BzR91b49OHqj/syncer.js"></script>
    <style>
      .fan-icon-spin {
        animation: spin infinite linear;
      }
      @keyframes spin {
        0% {
          transform: rotate(0deg);
        }
        100% {
          transform: rotate(360deg);
        }
      }
    </style>
  </head>
  <body class="bg-neutral-900 min-h-screen flex items-center justify-center">
    <div id="fanWidget" class="flex flex-col items-center justify-center text-white">
      <div class="fan_icon w-24 h-24 text-neutral-100">
        <i class="fas fa-fan fa-5x"></i>
      </div>
      <div class="mt-4">
        <span id="speedValue" class="text-4xl font-semibold">0</span> <span class="text-xl">RPM</span>
      </div>
    </div>
    <script>
      let ASSOCIATED_API = "Vehicle.Cabin.HVAC.Station.Row1.Left.FanSpeed"
      let interval = null
      function onWidgetLoaded(options) {
        let speedValue = document.getElementById("speedValue")
        let fanIcon = document.querySelector('.fan_icon i')
        interval = setInterval(() => {
          let apiValueObject = getApiValue(ASSOCIATED_API)
          let apiValue = apiValueObject?.value || 0
          speedValue.innerText = apiValue
          if(apiValue || apiValue !== 0) {
            fanIcon.classList.add('fan-icon-spin')
            let duration = 2 - (apiValue / 50) // Adjust fan speed animation based on API value
            fanIcon.style.animationDuration = `${duration}s`
          } else {
            fanIcon.classList.remove('fan-icon-spin')
          }
        }, 500)
      }
      function onWidgetUnloaded(options) {
        if(interval) clearInterval(interval)
      }
      // This function is called on window load to start the widget
      window.onload = function () {
        onWidgetLoaded();
      };
      // This function is called before the window is closed/unloaded
      window.onbeforeunload = function () {
        onWidgetUnloaded();
      };
    </script>
  </body>
</html>
```
+++
title = "Advanced"
date = 2023-08-01T07:04:25+07:00
weight = 8
chapter = true
pre = "<b>4. </b>"
+++
### Chapter 4
# Advanced
Now let's do the complex things.
---
title: "Architecture"
date: 2023-08-03T06:48:57+07:00
draft: false
weight: 1
---
## Getting started
Please have a look at the image below.
![architecture-from-playground-to-dreamKIT](https://bewebstudio.digitalauto.tech/data/projects/nTcRsgxcDWgr/Architecture/architecture-from-playground-to-dreamKIT-2.png)
This architecture has two parts:
- **(1) Playground general architecture**
- **(2) Architecture and flow from Playground to dreamKIT**
This page focuses on **(1) Playground general architecture**. For more information about **(2) Architecture and flow from Playground to dreamKIT**, please refer to [Playground to dreamKIT](https://docs.digital.auto/dreamkit/working/deployment/).
## Playground general architecture
![general-architecture](https://bewebstudio.digitalauto.tech/data/projects/nTcRsgxcDWgr/Architecture/general-architecture-2.png)
The playground is a cloud-based web application that provides a rapid prototyping environment for new, SDV-enabled features.
To bring the SDV development experience to the web, we currently use these technologies and tools on the playground:
**Front-end:**
- React: Front-end library
- TailwindCSS: CSS framework
**Back-end:**
- Netlify: Used for server-side functions such as authentication and permissions
- Firebase Firestore: Database
**Other:**
- Brython: Allows running Python code in the browser environment
- Socket.IO: Real-time bidirectional communication
- WebAssembly: Executes high-performance, low-level code in the browser
- COVESA VSS: Syntax and catalog for vehicle signals
- Velocitas: Toolchain to create containerized in-vehicle applications
## Dive deeper into playground
Please have a glance at the picture below. It describes the components of the Playground dashboard and how they work together.
![playground-dashboard](https://bewebstudio.digitalauto.tech/data/projects/nTcRsgxcDWgr/Architecture/playground-dashboard-2.png)
Before coming to what a Playground dashboard is, let's take a look at some of the components in the image above:
- **VSS-API**: APIs that adhere to the format of the COVESA Vehicle Signal Specification. You can also create your own custom APIs.
- **Simulator**: Provides simulation for the VSS-APIs. It is written in Python and later translated to JavaScript code to execute within the browser environment. For more information, please refer to [How Python-Javascript works](https://docs.digital.auto/advanced/how-python-javascript-works/)
- **code.py**: Python script responsible for interacting with the VSS-API and handling the associated logic.
- **Widget**: UI apps that fetch data from the VSS-API and display it. There are two types of widgets:
  - Built-in widgets
  - Custom widgets: These are managed and published at the [marketplace](https://marketplace.digitalauto.tech)
- **AI Engine**: 3rd-party services such as LandingAI
**So what is a Playground dashboard?**
| Dashboard Diagram | Dashboard Config on Playground | Actual Dashboard on Playground |
| :----------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------------------------------------: |
| ![dashboard-diagram](https://bewebstudio.digitalauto.tech/data/projects/nTcRsgxcDWgr/Architecture/dashboard-2.png) | ![dashboard-config](https://bewebstudio.digitalauto.tech/data/projects/nTcRsgxcDWgr/Architecture/dashboard-config-2.png) | ![dashboard-actual](https://bewebstudio.digitalauto.tech/data/projects/nTcRsgxcDWgr/Architecture/dashboard-actual.png) |
The Playground dashboard is where you can place your widgets. The dashboard has 10 tiles, and a widget can be placed on one or many tiles. You also have options to configure each widget, such as which APIs it interacts with.
## The server migration
![server-migration](https://bewebstudio.digitalauto.tech/data/projects/nTcRsgxcDWgr/Architecture/server-migration.png)
In the aforementioned architecture, 3rd-party services (Firebase and Netlify) enhance the speed of development. However, as the application scales, this part of the system transitions from a facilitator to a burden, making optimization more challenging and introducing additional complexity to the development process. That is why we are in the process of migrating these serverless platforms to our self-managed server (using NodeJS and MongoDB).
---
title: "Customization"
date: 2023-08-03T07:06:32+07:00
draft: true
---
---
title: "Simulator"
date: 2023-08-03T06:52:15+07:00
draft: true
weight: 7
---
---
title: "Widget"
date: 2023-08-03T06:52:09+07:00
draft: true
weight: 6
---
---
title: "How Python-Javascript works"
date: 2023-08-03T06:48:57+07:00
draft: false
weight: 1
---
## Prerequisite
To understand how this Python-JavaScript bridge works, you need some foundational knowledge of HTML, JavaScript, and Python.
You should also read the [Create 'Hello World' Prototype](https://docs.digital.auto/advanced/how-python-javascript-works/) guide, or at least have some experience with prototypes on the Playground, before reading this documentation.
## Similarity of Python code on Playground
No matter what prototype and model are being used, all Python code on the Playground shares the same structure.
It always contains this line of code:
```python
from sdv_model import Vehicle
```
Below is a snapshot of Python code on the Playground. In it, you can clearly see a `Vehicle` class being imported from the `sdv_model` module.
![vehicle-class-imported-from-sdv_model](https://bewebstudio.digitalauto.tech/data/projects/nTcRsgxcDWgr/How%20Python-Javascript%20works/vehicle-class-imported-from-sdv_model.png)
This `Vehicle` class serves as a core component. It is responsible for simulating all APIs within the prototype and facilitating every API call in the code. Understanding the implementation of the `Vehicle` class is a must for gaining insight into how the Python-JavaScript bridge works.
**Vehicle class implementation**
- This class is written in Python.
- Vehicle class is capable of:
  - Recognizing and interacting with the APIs in the prototype (VSS APIs or Custom/Wishlist APIs).
  - Simulating all states and values associated with the APIs.
  - Facilitating API calls, including get, set, and subscribe operations.
  - Validating the data types passed to API calls.
- Later, the `Vehicle` class is converted to JavaScript code using the [Brython](https://brython.info/) library and saved as a file at [https://digitalauto.netlify.app/brython/sdv_model.brython.js](https://digitalauto.netlify.app/brython/sdv_model.brython.js)
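To make those capabilities concrete, here is a minimal sketch of what a single API node of such a class could look like, assuming get/set/subscribe semantics with type validation; this is an illustration, not the playground's actual `sdv_model` source:
```python
# Sketch only: one API leaf node with get/set/subscribe and type checking
class ApiNode:
    def __init__(self, name, datatype=int):
        self.name = name
        self.datatype = datatype
        self.value = None
        self.subscribers = []

    async def get(self):
        # Return the currently simulated value
        return self.value

    async def set(self, value):
        # Validate the data type before accepting the value
        if not isinstance(value, self.datatype):
            raise TypeError(f"{self.name} expects {self.datatype.__name__}")
        self.value = value
        # Notify subscribe() callbacks about the change
        for callback in self.subscribers:
            callback(value)

    def subscribe(self, callback):
        self.subscribers.append(callback)
```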
The picture below depicts the interaction between the Python code (code.py) and the `Vehicle` class (the simulator):
![playground-dashboard](https://bewebstudio.digitalauto.tech/data/projects/nTcRsgxcDWgr/How%20Python-Javascript%20works/playground-dashboard-interaction.png)
## How Python code on playground is executed
Within a prototype, proceed to the Dashboard tab and click the Run button. This triggers the execution of the Python code written in the Code tab. The widgets should then be able to detect the changes your Python code makes to the APIs. Again, please refer to the [Create 'Hello World' Prototype](https://docs.digital.auto/advanced/how-python-javascript-works/) guide to understand what we are doing.
![run-dashboard](https://bewebstudio.digitalauto.tech/data/projects/nTcRsgxcDWgr/How%20Python-Javascript%20works/run-dashboard.png)
The Python code itself cannot run directly in the browser, so a series of steps must be undertaken to enable its execution. In more detail, the following approach is adopted:
- First, an iframe is created. This iframe contains all of the following:
  - The Python code from the Code tab
  - The Brython scripts, embedded through a CDN, which allow us to run Python code in the browser:
```javascript
<script referrerpolicy="origin" src="https://cdnjs.cloudflare.com/ajax/libs/brython/3.10.5/brython.js"></script>
```
  - And the aforementioned `Vehicle` class:
```javascript
<script referrerpolicy="origin" src="https://digitalauto.netlify.app/brython/sdv_model.brython.js"></script>
```
- After that, using the [Brython](https://brython.info/) library, the Python code from the Code tab is converted to JavaScript.
- Finally, the JavaScript code is executed using the `eval` function.
**Why do we need to create a separate iframe?**
- Because the `eval` function is used for execution, the code must live in a separate iframe to contain the potential risks of dangerous or destructive actions.
- Neither the Brython library nor JavaScript itself has a native mechanism to stop running code (in this case, a JavaScript Promise), for example an infinite-loop code block. Putting it in a separate iframe makes it a lot easier to stop when required.