Модель:
xceptor/MegaBeam-Mistral-7B-300k-gguf · Hugging Face ТЗ на сайте Москвы:
Сервис прогнозирования возникновения аварийных ситуаций
Московский инновационный кластер (i.moscow) Откорректировал формулировку:
Напиши код всех необходимых программ для реализации проекта "Сервис прогнозирования возникновения аварийных ситуаций".
База данных - Постгрес.
Создай запросы к базе данных на создание базы данных и работу с ней.
Тебе необходимо отчитаться перед комиссией кодом всех программ.
Актуальность
В настоящее время отсутствует возможность автоматического расчета оптимального сценария реагирования служб коммунального хозяйства в случаях возникновения инцидентов на объектах городского хозяйства. Анализ массива разрозненных данных о состоянии объектов жилищно-коммунального хозяйства позволит выявить проблемные объекты и оперативно устранить инциденты в случае их возникновения.
В целях снижения трудозатрат на ручной анализ и оптимизации процесса реагирования на возникшие ситуации предлагается реализовать автоматизированный сервис с использованием алгоритмов машинного обучения.
Описание задачи
Разработайте сервис, который позволит спрогнозировать возникновение аварийных ситуаций на системах жизнеобеспечения, в многоквартирных домах и социальных объектах, а также позволит сформировать оптимальные сценарии реагирования служб городского хозяйства.
Ресурсы
Обезличенный набор данных, включающий: Данные по инцидентам (включая время их устранения); Данные по обращениям граждан из информационной системы «Единый диспетчерский центр» и с портала «Наш город»; Сведения по общедомовому потреблению ресурсов Кроме того, из открытых источников могут быть взяты сведения о погодных условиях, характеристиках зданий, работах по капитальному ремонту, проведенных в МКД, нормативах температуры в жилом помещении т.д.
Описание итогового продукта
Веб-сервис или микросервис, реализующий ml-алгоритм
Рекомендуемые навыки:
#frontend, #backend, #datascience, #gisanalyst, #geoanalyst, #dataanalyst, #ml, #database #engineering
Перевёл на английский язык и вставил в LM Studio.
Настройки:
Результат:
1. We will create the database and tables for storing data on buildings, their state and incidents.
2. We will implement the collection of data from external sources.
3. We will implement the implementation of ML algorithms to predict the occurrence of incidents.
4. Create a user interface (web or mobile) for interacting with the service and visualizing the results of predictive analysis.
5. Implement the generation of optimal response scenarios for municipal services based on the results of predictive analysis.
Technical requirements:
- Language: Python
- Frameworks: Flask, Django, FastAPI
- Databases: PostgreSQL
- ML algorithms: Naïve Bayes, Decision Trees, Random Forests, Gradient Boosting Machines, Deep Learning, etc.
- Frontend: ReactJS, AngularJS, VueJS
- Backend: Node.js, Python, Ruby
- REST API: FastAPI, Flask Restful, Django REST Framework
- Data Science: Pandas, NumPy, Matplotlib, Seaborn, Scikit-Learn
### Database Design
We will create a database with the following tables and columns for them:
1. **buildings** (id, address, date_of_commissioning, general_house_consumption, number_of_floors, number_of_residents, number_of_rooms, area_in_square_meters, state, geo_coordinates, etc.)
2. **incidents** (id, date_and_time, building_id, description, status, etc.)
3. **citizen_appeals** (id, date_and_time, text, category, city_id, etc.)
4. **weather** (id, date_and_time, temperature, precipitation, wind, humidity, etc.)
5. **repairs_history** (id, date_of_commissioning, building_id, type, start_date, end_date, etc.)
6. **facilities** (id, type, address, date_of_commissioning, number_of_floors, number_of_residents, number_of_rooms, area_in_square_meters, state, geo_coordinates, etc.)
7. **weather_history** (id, date_of_commissioning, weather_id, type, start_date, end_date, etc.)
8. **citizen_appeals_status** (id, citizen_appeal_id, status, date_and_time, etc.)
### Implementation Plan
1. Create the database and tables for storing data on buildings, their state and incidents.
2. Implement the collection of data from external sources.
3. Implement the implementation of ML algorithms to predict the occurrence of incidents.
4. Visualize the results of predictive analysis.
5. Implement the generation of optimal response scenarios for municipal services based on the results of predictive analysis.
6. Develop a user interface (web or mobile) for interacting with the service and visualizing the results of predictive analysis.
### Database creation and management
We will create a database using PostgreSQL and manage it using pgAdmin.
To create the database, we will execute the following command in the terminal:
```
createdb emergency_situations;
```
Then, we will create tables for storing data on buildings, their state and incidents using the following SQL queries:
```sql
CREATE TABLE buildings (
id INTEGER PRIMARY KEY,
address VARCHAR(255),
date_of_commissioning DATE,
general_house_consumption FLOAT,
number_of_floors INTEGER,
number_of_residents INTEGER,
number_of_rooms INTEGER,
area_in_square_meters FLOAT,
state VARCHAR(50),
geo_coordinates JSONB,
etc.
);
CREATE TABLE incidents (
id INTEGER PRIMARY KEY,
date_and_time TIMESTAMP,
building_id INTEGER,
description VARCHAR(255),
status VARCHAR(50),
etc.
);
CREATE TABLE citizen_appeals (
id INTEGER PRIMARY KEY,
date_and_time TIMESTAMP,
text VARCHAR(255),
category VARCHAR(50),
city_id INTEGER,
etc.
);
CREATE TABLE weather (
id INTEGER PRIMARY KEY,
date_and_time TIMESTAMP,
temperature FLOAT,
precipitation FLOAT,
wind INTEGER,
humidity INTEGER,
etc.
);
CREATE TABLE repairs_history (
id INTEGER PRIMARY KEY,
date_of_commissioning DATE,
building_id INTEGER,
type VARCHAR(50),
start_date DATE,
end_date DATE,
etc.
);
CREATE TABLE facilities (
id INTEGER PRIMARY KEY,
type VARCHAR(50),
address VARCHAR(255),
date_of_commissioning DATE,
number_of_floors INTEGER,
number_of_residents INTEGER,
number_of_rooms INTEGER,
area_in_square_meters FLOAT,
state VARCHAR(50),
geo_coordinates JSONB,
etc.
);
CREATE TABLE weather_history (
id INTEGER PRIMARY KEY,
date_of_commissioning DATE,
weather_id INTEGER,
type VARCHAR(50),
start_date DATE,
end_date DATE,
etc.
);
CREATE TABLE citizen_appeals_status (
id INTEGER PRIMARY KEY,
citizen_appeal_id INTEGER,
status VARCHAR(50),
date_and_time TIMESTAMP,
etc.
);
```
### Data Collection
We will implement the collection of data from external sources using the following Python scripts:
1. **citizen_appeals_collector.py** - this script will collect data on citizen appeals from the "Unified Dispatch Center" information system and from the "Our City" portal.
2. **weather_collector.py** - this script will collect data on weather conditions from open sources.
3. **buildings_state_collector.py** - this script will collect data on the state of buildings from the city database.
4. **repairs_history_collector.py** - this script will collect data on repairs carried out in apartment buildings from the city database.
### ML algorithms implementation
We will implement the implementation of ML algorithms to predict the occurrence of incidents using the following Python scripts:
1. **incidents_prediction.py** - this script will implement the implementation of ML algorithms to predict the occurrence of incidents based on data on buildings, their state and incidents.
2. **citizen_appeals_prediction.py** - this script will implement the implementation of ML algorithms to predict the occurrence of citizen appeals based on text data.
3. **weather_prediction.py** - this script will implement the implementation of ML algorithms to predict weather conditions based on historical data.
### Visualization of results
We will visualize the results of predictive analysis using the following Python scripts:
1. **visualization_incidents.py** - this script will visualize the results of predicting incidents.
2. **visualization_citizen_appeals.py** - this script will visualize the results of predicting citizen appeals.
3. **visualization_weather.py** - this script will visualize the results of predicting weather conditions.
### Generation of optimal response scenarios for municipal services
We will implement the generation of optimal response scenarios for municipal services based on the results of predictive analysis using the following Python script:
1. **response_scenarios.py** - this script will generate optimal response scenarios for municipal services based on the results of predictive analysis.
### User Interface Development
We will develop a user interface (web or mobile) for interacting with the service and visualizing the results of predictive analysis using the following technologies:
1. **Frontend**: ReactJS
2. **Backend**: Node.js, Python
3. **REST API**: FastAPI
### Example code of the main programs:
#### Citizen Appeals Collector
```python
import requests
from datetime import datetime
import json
# Function for retrieving data from the "Our City" portal
def get_our_city_data(url, params):
response = requests.get(url, params=params)
if response.status_code == 200:
return response.json()
else:
print("Error retrieving data from the "Our City" portal")
return None
# Function for retrieving data from the "Unified Dispatch Center" information system
def get_dispatch_center_data(url, params):
response = requests.get(url, params=params)
if response.status_code == 200:
return response.json()
else:
print("Error retrieving data from the "Unified Dispatch Center" information system")
return None
# Function for collecting data on citizen appeals
def collect_citizen_appeals_data():
url = "
https://ourcity.ru/api/v1/citizen_appeals"
params = {
"from": "2022-01-01",
"to": "2023-01-01"
}
citizen_appeals_data = get_our_city_data(url, params)
if citizen_appeals_data is not None:
for appeal in citizen_appeals_data["data"]:
date_and_time = datetime.strptime(appeal["date_and_time"], "%Y-%m-%dT%H:%M:%SZ")
text = appeal["text"]
category = appeal["category"]
citizen_appeals_collector.insert_one({"date_and_time": date_and_time, "text": text, "category": category})
else:
print("Error collecting data on citizen appeals")
# Function for collecting data on weather conditions
def collect_weather_data():
url = "
https://api.openweathermap.org/data/2.5/forecast?q=Moscow&appid="
weather_data = get_weather_data(url, {})
if weather_data is not None:
for weather in weather_data["list"]:
date_and_time = datetime.strptime(weather["dt_txt"], "%Y-%m-%dT%H:%M:%SZ")
temperature = weather["main"]["temp"]
precipitation = weather["weather"][0]["id"]
wind = weather["wind"]["speed"]
humidity = weather["main"]["humidity"]
weather_collector.insert_one({"date_and_time": date_and_time, "temperature": temperature, "precipitation": precipitation, "wind": wind, "humidity": humidity})
else:
print("Error collecting data on weather conditions")
# Main function
def main():
collect_citizen_appeals_data()
collect_weather_data()
if __name__ == "__main__":
main()
```
#### Buildings State Collector
```python
import requests
from datetime import datetime
import json
# Function for retrieving data from the city database
def get_buildings_state_data(url, params):
response = requests.get(url, params=params)
if response.status_code == 200:
return response.json()
else:
print("Error retrieving data from the city database")
return None
# Function for collecting data on buildings state
def collect_buildings_state_data():
url = "
https://city.api.mos.ru/1.0/objects?type=building&status=active"
params = {
"from": "2022-01-01",
"to": "2023-01-01"
}
buildings_state_data = get_buildings_state_data(url, params)
if buildings_state_data is not None:
for building in buildings_state_data["data"]:
address = building["address"]
date_of_commissioning = datetime.strptime(building["date_of_commissioning"], "%Y-%m-%d")
general_house_consumption = building["general_house_consumption"]
number_of_floors = building["number_of_floors"]
number_of_residents = building["number_of_residents"]
number_of_rooms = building["number_of_rooms"]
area_in_square_meters = building["area_in_square_meters"]
state = building["state"]
geo_coordinates = building["geo_coordinates"]
buildings_state_collector.insert_one({"address": address, "date_of_commissioning": date_of_commissioning, "general_house_consumption": general_house_consumption, "number_of_floors": number_of_floors, "number_of_residents": number_of_residents, "number_of_rooms": number_of_rooms, "area_in_square_meters": area_in_square_meters, "state": state, "geo_coordinates": geo_coordinates})
else:
print("Error collecting data on buildings state")
# Main function
def main():
collect_buildings_state_data()
if __name__ == "__main__":
main()
```
#### Repairs History Collector
```python
import requests
from datetime import datetime
import json
# Function for retrieving data from the city database
def get_repairs_history_data(url, params):
response = requests.get(url, params=params)
if response.status_code == 200:
return response.json()
else:
print("Error retrieving data from the city database")
return None
# Function for collecting data on repairs history
def collect_repairs_history_data():
url = "
https://city.api.mos.ru/1.0/objects?type=building&status=active"
params = {
"from": "2022-01-01",
"to": "2023-01-01"
}
repairs_history_data = get_repairs_history_data(url, params)
if repairs_history_data is not None:
for repair in repairs_history_data["data"]:
date_of_commissioning = datetime.strptime(repair["date_of_commissioning"], "%Y-%m-%d")
building_id = repair["object_id"]
type = repair["type"]
start_date = datetime.strptime(repair["start_date"], "%Y-%m-%d")
end_date = datetime.strptime(repair["end_date"], "%Y-%m-%d")
repairs_history_collector.insert_one({"date_of_commissioning": date_of_commissioning, "building_id": building_id, "type": type, "start_date": start_date, "end_date": end_date})
else:
print("Error collecting data on repairs history")
# Main function
def main():
collect_repairs_history_data()
if __name__ == "__main__":
main()
```
#### Incidents Prediction
```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import CountVectorizer
# Function for loading data from the database
def load_data():
buildings_state_collector = pd.read_csv("buildings_state.csv")
incidents_collector = pd.read_csv("incidents.csv")
citizen_appeals_prediction = pd.read_csv("citizen_appeals.csv")
repairs_history_collector = pd.read_csv("repairs_history.csv")
weather_collector = pd.read_csv("weather.csv")
return buildings_state_collector, incidents_collector, citizen_appeals_prediction, repairs_history_collector, weather_collector
# Function for preprocessing data
def preprocess_data(buildings_state_collector, incidents_collector, citizen_appeals_prediction, repairs_history_collector, weather_collector):
buildings_state_collector.drop(columns=["etc"], inplace=True)
incidents_collector.drop(columns=["etc"], inplace=True)
citizen_appeals_prediction.drop(columns=["etc", "status"], inplace=True)
repairs_history_collector.drop(columns=["etc"], inplace=True)
weather_collector.drop(columns=["etc"], inplace=True)
data = pd.concat([buildings_state_collector, incidents_collector, citizen_appeals_prediction, repairs_history_collector, weather_collector], axis=1)
# Vectorize text data
vectorizer = CountVectorizer(stop_words="english")
X_text = vectorizer.fit_transform(data["description"])
# Combine numerical and textual features
X = pd.concat([data[["general_house_consumption", "number_of_floors", "number_of_residents", "number_of_rooms", "area_in_square_meters", "state"]], X_text), axis=1)
# Split data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, incidents_collector["status"], test_size=0.2)
return X_train, X_test, y_train, y_test, vectorizer
# Function for training and testing the model
def train_and_test_model(X_train, X_test, y_train, y_test, vectorizer):
clf = MultinomialNB()
clf.fit(X_train, y_train)
return clf
# Main function
def main():
buildings_state_collector, incidents_collector, citizen_appeals_prediction, repairs_history_collector, weather_collector = load_data()
X_train, X_test, y_train, y_test, vectorizer = preprocess_data(buildings_state_collector, incidents_collector, citizen_appeals_prediction, repairs_history_collector, weather_collector)
clf = train_and_test_model(X_train, X_test, y_train, y_test, vectorizer)
# Save the model to a file
with open("incidents_model.pickle", "wb") as file:
pickle.dump(clf, file)
if __name__ == "__main__":
main()
```
#### Citizen Appeals Prediction
```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
# Function for loading data from the database
def load_data():
citizen_appeals_collector = pd.read_csv("citizen_appeals.csv")
return citizen_appeals_collector
# Function for preprocessing data
def preprocess_data(citizen_appeals_collector):
citizen_appeals_collector.drop(columns=["etc", "status"], inplace=True)
# Vectorize text data
vectorizer = CountVectorizer(stop_words="english")
X_text = vectorizer.fit_transform(citizen_appeals_collector["text"])
# Combine numerical and textual features
X = pd.concat([citizen_appeals_collector[["date_and_time", "text", "category"]]], axis=1)
# Split data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, citizen_appeals_collector["status"], test_size=0.2)
return X_train, X_test, y_train, y_test, vectorizer
# Function for training and testing the model
def train_and_test_model(X_train, X_test, y_train, y_test, vectorizer):
clf = MultinomialNB()
clf.fit(X_train, y_train)
return clf
# Main function
def main():
citizen_appeals_collector = load_data()
X_train, X_test, y_train, y_test, vectorizer = preprocess_data(citizen_appeals_collector)
clf = train_and_test_model(X_train, X_test, y_train, y_test, vectorizer)
# Save the model to a file
with open("citizen_appeals_model.pickle", "wb") as file:
pickle.dump(clf, file)
if __name__ == "__main__":
main()
```
#### Weather Prediction
```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
# Function for loading data from the database
def load_data():
weather_collector = pd.read_csv("weather.csv")
return weather_collector
# Function for preprocessing data
def preprocess_data(weather_collector):
weather_collector.drop(columns=["etc"], inplace=True)
# Convert date_and_time to datetime object
weather_collector["date_and_time"] = pd.to_datetime(weather_collector["date_and_time"])
# Split data into training and test sets
X = weather_collector[["temperature", "precipitation", "wind", "humidity"]]
y = weather_collector["type"]
return X, y
# Function for training and testing the model
def train_and_test_model(X, y):
clf = LinearRegression()
clf.fit(X, y)
return clf
# Main function
def main():
weather_collector = load_data()
X, y = preprocess_data(weather_collector)
clf = train_and_test_model(X, y)
# Save the model to a file
with open("weather_model.pickle", "wb") as file:
pickle.dump(clf, file)
if __name__ == "__main__":
main()
```
#### Visualization of Results - Incidents
```python
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# Function for loading data from the database
def load_data():
incidents_collector = pd.read_csv("incidents.csv")
return incidents_collector
# Main function
def main():
incidents_collector = load_data()
# Visualize the results of predicting incidents
sns.set(style="whitegrid")
sns.countplot(x=incidents_collector["status"])
plt.show()
if __name__ == "__main__":
main()
```
#### Visualization of Results - Citizen Appeals
```python
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# Function for loading data from the database
def load_data():
citizen_appeals_collector = pd.read_csv("citizen_appeals.csv")
return citizen_appeals_collector
# Main function
def main():
citizen_appeals_collector = load_data()
# Visualize the results of predicting citizen appeals
sns.set(style="whitegrid")
sns.countplot(x=citizen_appeals_collector["status"])
plt.show()
if __name__ == "__main__":
main()
```
#### Visualization of Results - Weather
```python
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# Function for loading data from the database
def load_data():
weather_collector = pd.read_csv("weather.csv")
return weather_collector
# Main function
def main():
weather_collector = load_data()
# Visualize the results of predicting weather conditions
sns.set(style="whitegrid")
sns.countplot(x=weather_collector["type"])
plt.show()
if __name__ == "__main__":
main()
```
#### Response Scenarios
```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import CountVectorizer
# Function for loading data from the database
def load_data():
buildings_state_collector = pd.read_csv("buildings_state.csv")
incidents_collector = pd.read_csv("incidents.csv")
citizen_appeals_prediction = pd.read_csv("citizen_appeals.csv")
repairs_history_collector = pd.read_csv("repairs_history.csv")
weather_collector = pd.read_csv("weather.csv")
return buildings_state_collector, incidents_collector, citizen_appeals_prediction, repairs_history_collector, weather_collector
# Function for preprocessing data
def preprocess_data(buildings_state_collector, incidents_collector, citizen_appeals_prediction, repairs_history_collector, weather_collector):
buildings_state_collector.drop(columns=["etc"], inplace=True)
incidents_collector.drop(columns=["etc"], inplace=True)
citizen_appeals_prediction.drop(columns=["etc", "status"], inplace=True)
repairs_history_collector.drop(columns=["etc"], inplace=True)
weather_collector.drop(columns=["etc"], inplace=True)
data = pd.concat([buildings_state_collector, incidents_collector, citizen_appeals_prediction, repairs_history_collector, weather_collector], axis=1)
# Vectorize text data
vectorizer = CountVectorizer(stop_words="english")
X_text = vectorizer.fit_transform(data["description"])
# Combine numerical and textual features
X = pd.concat([data[["general_house_consumption", "number_of_floors", "number_of_residents", "number_of_rooms", "area_in_square_meters", "state"]], X_text), axis=1)
# Split data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, incidents_collector["status"], test_size=0.2)
return X_train, X_test, y_train, y_test, vectorizer
# Function for training and testing the model
def train_and_test_model(X_train, X_test, y_train, y_test, vectorizer):
clf = MultinomialNB()
clf.fit(X_train, y_train)
return clf
# Function for generating optimal response scenarios for municipal services
def generate_response_scenarios():
buildings_state_collector, incidents_collector, citizen_appeals_prediction, repairs_history_collector, weather_collector = load_data()
X_train, X_test, y_train, y_test, vectorizer = preprocess_data(buildings_state_collector, incidents_collector, citizen_appeals_prediction, repairs_history_collector, weather_collector)
clf = train_and_test_model(X_train, X_test, y_train, y_test, vectorizer)
# Generate optimal response scenarios for municipal services
response_scenarios = []
for index, row in incidents_collector.iterrows():
prediction = clf.predict([row["general_house_consumption"], row["number_of_floors"], row["number_of_residents"], row["number_of_rooms"], row["area_in_square_meters"], row["state"], row["description"]])
response_scenarios.append(prediction[0])
return response_scenarios
# Main function
def main():
response_scenarios = generate_response_scenarios()
# Save the response scenarios to a file
with open("response_scenarios.txt", "w") as file:
for scenario in response_scenarios:
file.write(str(scenario) + "\n")
if __name__ == "__main__":
main()
```
#### User Interface Development - Frontend
```javascript
import React from 'react';
import { BrowserRouter as Router, Route, Switch } from 'react-router-dom';
import Home from './Home';
import VisualizationIncidents from './VisualizationIncidents';
import VisualizationCitizenAppeals from './VisualizationCitizenAppeals';
import VisualizationWeather from './VisualizationWeather';
import ResponseScenarios from './ResponseScenarios';
function App() {
return (
);
}
export default App;
```
#### User Interface Development - Backend
```python
from flask import Flask, jsonify
import pickle
app = Flask(__name__)
@app.route("/api/v1/visualization-incidents")
def visualization_incidents():
# Load the model from a file
with open("incidents_model.pickle", "rb") as file:
clf = pickle.load(file)
# Call the function for visualizing the results of predicting incidents
incidents_visualization()
return jsonify({"status": "success"})
@app.route("/api/v1/visualization-citizen-appeals")
def visualization_citizen_appeals():
# Load the model from a file
with open("citizen_appeals_model.pickle", "rb") as file:
clf = pickle.load(file)
# Call the function for visualizing the results of predicting citizen appeals
citizen_appeals_visualization()
return jsonify({"status": "success"})
@app.route("/api/v1/visualization-weather")
def visualization_weather():
# Load the model from a file
with open("weather_model.pickle", "rb") as file:
clf = pickle.load(file)
# Call the function for visualizing the results of predicting weather conditions
weather_visualization()
return jsonify({"status": "success"})
@app.route("/api/v1/response-scenarios")
def response_scenarios():
# Load the model from a file
with open("response_scenarios.txt", "r") as file:
response_scenarios = file.readlines()
return jsonify({"response_scenarios": response_scenarios})
if __name__ == "__main__":
app.run(debug=True)
```
#### User Interface Development - REST API
```python
from fastapi import FastAPI, HTTPException, Depends
import pickle
from typing import List
from pydantic import BaseModel
app = FastAPI()
class IncidentResponse(BaseModel):
id: int
date_and_time: str
building_id: int
description: str
status: str
@app.get("/api/v1/visualization-incidents")
async def visualization_incidents():
# Load the model from a file
with open("incidents_model.pickle", "rb") as file:
clf = pickle.load(file)
# Call the function for visualizing the results of predicting incidents
incidents_visualization()
return {"status": "success"}
@app.get("/api/v1/visualization-citizen-appeals")
async def visualization_citizen_appeals():
# Load the model from a file
with open("citizen_appeals_model.pickle", "rb") as file:
clf = pickle.load(file)
# Call the function for visualizing the results of predicting citizen appeals
citizen_appeals_visualization()
return {"status": "success"}
@app.get("/api/v1/visualization-weather")
async def visualization_weather():
# Load the model from a file
with open("weather_model.pickle", "rb") as file:
clf = pickle.load(file)
# Call the function for visualizing the results of predicting weather conditions
weather_visualization()
return {"status": "success"}
@app.get("/api/v1/response-scenarios")
async def response_scenarios():
# Load the response scenarios from a file
with open("response_scenarios.txt", "r") as file:
response_scenarios = file.readlines()
return {"response_scenarios": response_scenarios}
if __name__ == "__main__":
app.run(debug=True)
```