A Gradio-based demonstration for the Microsoft Fara-7B model, designed as a computer use agent. Users upload UI screenshots (e.g., desktop or app interfaces), provide task instructions (e.g., "Click on the search bar"), and receive parsed actions (clicks, types) with visualized indicators (circles and labels) overlaid on the image. Supports JSON-formatted tool calls for precise coordinate-based interactions.
Demo: https://huggingface.co/spaces/prithivMLmods/CUA-GUI-Operator
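The hosted Space can also be queried programmatically with `gradio_client`. This is a minimal sketch only; the endpoint names and argument order are not documented here, so `view_api()` is used to discover the actual signature.

```python
# Minimal sketch: connect to the hosted Space and list its callable endpoints.
from gradio_client import Client

client = Client("prithivMLmods/CUA-GUI-Operator")
client.view_api()  # prints endpoint names, parameters, and return types
```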
- UI Image Processing: Upload screenshots; model analyzes and suggests actions like clicks or text input at specific coordinates.
- Task-Driven Inference: Natural language instructions generate structured JSON actions (e.g., {"action": "click", "coordinate": [400, 300]}).
- Action Visualization: Overlays red circles for clicks and blue for others, with labels (e.g., "Click" or "Type: 'Hello'") on the output image.
- Response Parsing: Extracts tool calls from the model output using regex; handles multiple actions per task (see the sketch after this list).
- Custom Theme: OrangeRedTheme with gradients for an intuitive interface.
- Examples Integration: Pre-loaded samples for quick testing (e.g., Windows start menu, search box).
- Queueing Support: Request queue accepts up to 50 pending inferences for efficient handling of traffic.
- Error Resilience: Fallbacks for model loading failures or invalid inputs; console logging for debugging.
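The parsing and visualization flow can be pictured with the sketch below. The tool-call regex and the JSON keys (`action`, `coordinate`, `text`) are assumptions based on the examples in this README; `app.py` defines the exact format.

```python
# Minimal sketch of the parse-and-annotate flow, assuming the model emits
# JSON objects with "action", "coordinate", and (for typing) "text" keys,
# as in the examples above. app.py defines the exact format.
import json
import re

from PIL import Image, ImageDraw

ACTION_RE = re.compile(r'\{[^{}]*"action"[^{}]*\}')


def parse_actions(raw_response: str) -> list[dict]:
    """Extract JSON action objects from the raw model response."""
    actions = []
    for fragment in ACTION_RE.findall(raw_response):
        try:
            actions.append(json.loads(fragment))
        except json.JSONDecodeError:
            continue  # skip malformed fragments instead of failing the whole task
    return actions


def annotate(image: Image.Image, actions: list[dict]) -> Image.Image:
    """Overlay a circle and label for each parsed action on a copy of the image."""
    out = image.convert("RGB").copy()
    draw = ImageDraw.Draw(out)
    for act in actions:
        x, y = act.get("coordinate", (0, 0))
        color = "red" if act.get("action") == "click" else "blue"  # red = click, blue = other
        radius = 12
        draw.ellipse((x - radius, y - radius, x + radius, y + radius), outline=color, width=3)
        label = act.get("action", "").capitalize()
        if act.get("text"):
            label = f"{label}: '{act['text']}'"
        draw.text((x + radius + 4, y - radius), label, fill=color)
    return out
```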
- Python 3.10 or higher.
- CUDA-compatible GPU (recommended for float16; falls back to CPU; see the check after this list).
- Git for cloning dependencies.
- Hugging Face account (optional, for model caching via `huggingface_hub`).
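Once the dependencies from the next section are installed, a quick sanity check for the Python and GPU prerequisites above (this assumes `torch` is already available):

```python
# Quick check of the prerequisites listed above.
import sys

import torch

print(sys.version_info >= (3, 10))  # True if the interpreter meets the Python requirement
print(torch.cuda.is_available())    # True if float16 inference on a CUDA GPU is possible
```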
- Clone the repository:

  ```bash
  git clone https://github.com/PRITHIVSAKTHIUR/Fara-7B-Action-Points-Demo.git
  cd Fara-7B-Action-Points-Demo
  ```

- Install dependencies: create a `requirements.txt` file with the content below, then run `pip install -r requirements.txt`.

  ```text
  transformers==4.57.1
  webdriver-manager
  huggingface_hub
  python-dotenv
  sentencepiece
  qwen-vl-utils
  gradio_modal
  torchvision
  matplotlib
  accelerate
  num2words
  pydantic
  requests
  pillow
  openai
  spaces
  einops
  torch
  peft
  ```

- Start the application:

  ```bash
  python app.py
  ```

  The demo launches at http://localhost:7860 (or the provided URL if using Spaces).
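For reference, a minimal sketch of the model-loading step with the float16/CPU fallback noted in the prerequisites. The repository ID and loader classes are assumptions (the `qwen-vl-utils` dependency suggests a Qwen2.5-VL-style stack); `app.py` is the authoritative source.

```python
# Sketch of model loading with the float16/CPU fallback described above.
# The repo ID and loader classes are assumptions; check app.py for the
# exact ones it uses.
import torch
from transformers import AutoModelForImageTextToText, AutoProcessor

MODEL_ID = "microsoft/Fara-7B"  # assumed Hugging Face repository ID

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32  # float32 on CPU or if float16 OOMs

processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForImageTextToText.from_pretrained(
    MODEL_ID,
    torch_dtype=dtype,
    trust_remote_code=True,
).to(device)
```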
- Upload Image: Provide a UI screenshot (e.g., PNG of a desktop or app window).
- Enter Task: Describe the action in the textbox (e.g., "Click on the start menu" or "Type 'Hello World' in the search box").
- Execute: Click "Execute Agent" to run inference.
- View Results:
  - Text: Raw model response with parsed JSON actions.
  - Image: Annotated screenshot showing action points (circles with labels).
- Upload a Windows desktop image.
- Task: "Click on the start menu."
- Output: Response with click action at coordinates; image with red circle labeled "Click" on the start button.
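For this example, the parsed action in the raw response follows the JSON shape shown in the feature list above; the coordinates below are illustrative and depend on the screenshot resolution:

```json
{"action": "click", "coordinate": [400, 300]}
```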
- Model Loading Errors: Ensure transformers 4.57.1; check CUDA with `torch.cuda.is_available()`. Use `torch.float32` if float16 causes OOM errors.
- No Actions Parsed: Verify the task is clearly phrased; the raw output is logged to the console. Increase `max_new_tokens` if the response is truncated.
- Visualization Issues: PIL font errors fall back to the default font; ensure images are RGB.
- Queue Full: Increase `max_size` in `demo.queue()` for higher traffic (see the sketch below).
- Vision Utils: Install `qwen-vl-utils` for image processing; test with the provided examples.
- UI Rendering: Set `ssr_mode=True` if gradients fail; check the CSS for custom styles.
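A sketch of where the queue and rendering settings from the notes above live; `demo` stands in for the `gr.Blocks` app built in `app.py`, and the keyword values are illustrative.

```python
# Sketch of the queue and launch settings referenced in the notes above.
# `demo` stands in for the gr.Blocks app built in app.py.
import gradio as gr

with gr.Blocks() as demo:
    ...  # UI components (image upload, task textbox, outputs) defined in app.py

demo.queue(max_size=50)      # raise max_size to accept more pending requests
demo.launch(ssr_mode=True)   # toggle ssr_mode if gradient styles fail to render
```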
Contributions encouraged! Fork, create a feature branch (e.g., for multi-step tasks), and submit PRs with tests. Focus areas:
- Support for video inputs or real-time GUI control.
- Additional action types (e.g., scroll, drag).
- Integration with browser automation.
Repository: https://github.com/PRITHIVSAKTHIUR/Fara-7B-Action-Points-Demo.git
Apache License 2.0. See LICENSE for details.
Built by Prithiv Sakthi. Report issues via the repository.
