DavMelchi committed
Commit 0b898d7 · 1 Parent(s): 092401b

test panel
panel_app/convert_to_excel_panel.py ADDED
@@ -0,0 +1,23 @@
+import io
+from typing import Iterable, Sequence
+
+import pandas as pd
+
+
+def write_dfs_to_excel(
+    dfs: Sequence[pd.DataFrame], sheet_names: Sequence[str], index: bool = True
+) -> bytes:
+    """Simple Excel export for Panel.
+
+    Writes the given DataFrames to an in-memory XLSX file and returns the bytes.
+    No Streamlit dependency and no heavy formatting, to keep Panel exports fast
+    and avoid Streamlit runtime warnings.
+    """
+    bytes_io = io.BytesIO()
+    with pd.ExcelWriter(bytes_io, engine="xlsxwriter") as writer:
+        for df, name in zip(dfs, sheet_names):
+            # Ensure we always write a valid DataFrame, even if None was passed
+            safe_df = df if isinstance(df, pd.DataFrame) else pd.DataFrame()
+            safe_df.to_excel(writer, sheet_name=str(name), index=index)
+
+    return bytes_io.getvalue()
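The None guard and the `zip()` pairing in `write_dfs_to_excel` have two consequences worth noting: a `None` entry becomes an empty (but valid) sheet, and mismatched `dfs`/`sheet_names` lengths are silently truncated to the shorter list. A minimal sketch of just that guard logic (the frames and names below are made-up inputs; no Excel engine is needed to illustrate it):

```python
import pandas as pd

# Hypothetical inputs for write_dfs_to_excel: one real frame, one None.
frames = [pd.DataFrame({"a": [1, 2]}), None]
names = ["data", "empty"]

# Anything that is not a DataFrame is replaced with an empty one, so a None
# entry yields an empty sheet instead of raising inside to_excel().
safe = [df if isinstance(df, pd.DataFrame) else pd.DataFrame() for df in frames]
assert safe[1].empty

# zip() pairs frames with names; any surplus names (or frames) are dropped.
paired = list(zip(safe, names + ["extra"]))
assert len(paired) == 2
```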
panel_app/trafic_analysis_panel.py ADDED
@@ -0,0 +1,2331 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import io
2
+ import os
3
+ import sys
4
+ import zipfile
5
+ from datetime import date, timedelta
6
+
7
+ import numpy as np
8
+ import pandas as pd
9
+ import panel as pn
10
+ import plotly.express as px
11
+
12
+ ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
13
+ if ROOT_DIR not in sys.path:
14
+ sys.path.insert(0, ROOT_DIR)
15
+
16
+ from panel_app.convert_to_excel_panel import write_dfs_to_excel
17
+ from utils.utils_vars import get_physical_db
18
+
19
+ pn.extension("plotly", "tabulator")
20
+
21
+
22
+ def read_fileinput_to_df(file_input: pn.widgets.FileInput) -> pd.DataFrame | None:
23
+ """Read a Panel FileInput (ZIP or CSV) into a DataFrame.
24
+
25
+ Returns None if no file is provided.
26
+ """
27
+ if file_input is None or not file_input.value:
28
+ return None
29
+
30
+ filename = (file_input.filename or "").lower()
31
+ data = io.BytesIO(file_input.value)
32
+
33
+ if filename.endswith(".zip"):
34
+ with zipfile.ZipFile(data) as z:
35
+ csv_files = [f for f in z.namelist() if f.lower().endswith(".csv")]
36
+ if not csv_files:
37
+ raise ValueError("No CSV file found in the ZIP archive")
38
+ with z.open(csv_files[0]) as f:
39
+ return pd.read_csv(f, encoding="latin1", sep=";", low_memory=False)
40
+ elif filename.endswith(".csv"):
41
+ return pd.read_csv(data, encoding="latin1", sep=";", low_memory=False)
42
+ else:
43
+ raise ValueError("Unsupported file format. Please upload a ZIP or CSV file.")
44
+
45
+
46
+ def extract_code(name):
47
+ name = name.replace(" ", "_") if isinstance(name, str) else None
48
+ if name and len(name) >= 10:
49
+ try:
50
+ return int(name.split("_")[0])
51
+ except ValueError:
52
+ return None
53
+ return None
54
+
55
+
56
+ def preprocess_2g(df: pd.DataFrame) -> pd.DataFrame:
57
+ df = df[df["BCF name"].str.len() >= 10].copy()
58
+ df["2g_data_trafic"] = ((df["TRAFFIC_PS DL"] + df["PS_UL_Load"]) / 1000).round(1)
59
+ df.rename(columns={"2G_Carried Traffic": "2g_voice_trafic"}, inplace=True)
60
+ df["code"] = df["BCF name"].apply(extract_code)
61
+ df["code"] = pd.to_numeric(df["code"], errors="coerce")
62
+ df = df[df["code"].notna()]
63
+ df["code"] = df["code"].astype(int)
64
+ date_format = (
65
+ "%m.%d.%Y %H:%M:%S" if len(df["PERIOD_START_TIME"].iat[0]) > 10 else "%m.%d.%Y"
66
+ )
67
+ df["date"] = pd.to_datetime(df["PERIOD_START_TIME"], format=date_format)
68
+ df["ID"] = df["date"].astype(str) + "_" + df["code"].astype(str)
69
+
70
+ if "TCH availability ratio" in df.columns:
71
+ df["2g_tch_avail"] = pd.to_numeric(
72
+ df["TCH availability ratio"], errors="coerce"
73
+ )
74
+
75
+ agg_dict = {
76
+ "2g_data_trafic": "sum",
77
+ "2g_voice_trafic": "sum",
78
+ }
79
+ if "2g_tch_avail" in df.columns:
80
+ agg_dict["2g_tch_avail"] = "mean"
81
+
82
+ df = df.groupby(["date", "ID", "code"], as_index=False).agg(agg_dict)
83
+ return df
84
+
85
+
86
+ def preprocess_3g(df: pd.DataFrame) -> pd.DataFrame:
87
+ df = df[df["WBTS name"].str.len() >= 10].copy()
88
+ df["code"] = df["WBTS name"].apply(extract_code)
89
+ df["code"] = pd.to_numeric(df["code"], errors="coerce")
90
+ df = df[df["code"].notna()]
91
+ df["code"] = df["code"].astype(int)
92
+ date_format = (
93
+ "%m.%d.%Y %H:%M:%S" if len(df["PERIOD_START_TIME"].iat[0]) > 10 else "%m.%d.%Y"
94
+ )
95
+ df["date"] = pd.to_datetime(df["PERIOD_START_TIME"], format=date_format)
96
+ df["ID"] = df["date"].astype(str) + "_" + df["code"].astype(str)
97
+ df.rename(
98
+ columns={
99
+ "Total CS traffic - Erl": "3g_voice_trafic",
100
+ "Total_Data_Traffic": "3g_data_trafic",
101
+ },
102
+ inplace=True,
103
+ )
104
+
105
+ kpi_col = None
106
+ for col in df.columns:
107
+ if "cell availability" in str(col).lower():
108
+ kpi_col = col
109
+ break
110
+
111
+ if kpi_col is not None:
112
+ df["3g_cell_avail"] = pd.to_numeric(df[kpi_col], errors="coerce")
113
+
114
+ agg_dict = {
115
+ "3g_voice_trafic": "sum",
116
+ "3g_data_trafic": "sum",
117
+ }
118
+ if "3g_cell_avail" in df.columns:
119
+ agg_dict["3g_cell_avail"] = "mean"
120
+
121
+ df = df.groupby(["date", "ID", "code"], as_index=False).agg(agg_dict)
122
+ return df
123
+
124
+
125
+ def preprocess_lte(df: pd.DataFrame) -> pd.DataFrame:
126
+ df = df[df["LNBTS name"].str.len() >= 10].copy()
127
+ df["lte_data_trafic"] = (
128
+ df["4G/LTE DL Traffic Volume (GBytes)"]
129
+ + df["4G/LTE UL Traffic Volume (GBytes)"]
130
+ )
131
+ df["code"] = df["LNBTS name"].apply(extract_code)
132
+ df["code"] = pd.to_numeric(df["code"], errors="coerce")
133
+ df = df[df["code"].notna()]
134
+ df["code"] = df["code"].astype(int)
135
+ date_format = (
136
+ "%m.%d.%Y %H:%M:%S" if len(df["PERIOD_START_TIME"].iat[0]) > 10 else "%m.%d.%Y"
137
+ )
138
+ df["date"] = pd.to_datetime(df["PERIOD_START_TIME"], format=date_format)
139
+ df["ID"] = df["date"].astype(str) + "_" + df["code"].astype(str)
140
+ if "Cell Avail excl BLU" in df.columns:
141
+ df["lte_cell_avail"] = pd.to_numeric(df["Cell Avail excl BLU"], errors="coerce")
142
+
143
+ agg_dict = {"lte_data_trafic": "sum"}
144
+ if "lte_cell_avail" in df.columns:
145
+ agg_dict["lte_cell_avail"] = "mean"
146
+
147
+ df = df.groupby(["date", "ID", "code"], as_index=False).agg(agg_dict)
148
+ return df
149
+
150
+
151
+ def merge_and_compare(df_2g, df_3g, df_lte, pre_range, post_range, last_period_range):
152
+ physical_db = get_physical_db()
153
+ physical_db["code"] = physical_db["Code_Sector"].str.split("_").str[0]
154
+ physical_db["code"] = (
155
+ pd.to_numeric(physical_db["code"], errors="coerce").fillna(0).astype(int)
156
+ )
157
+ physical_db = physical_db[["code", "Longitude", "Latitude", "City"]]
158
+ physical_db = physical_db.drop_duplicates(subset="code")
159
+
160
+ df = pd.merge(df_2g, df_3g, on=["date", "ID", "code"], how="outer")
161
+ df = pd.merge(df, df_lte, on=["date", "ID", "code"], how="outer")
162
+
163
+ for col in [
164
+ "2g_data_trafic",
165
+ "2g_voice_trafic",
166
+ "3g_voice_trafic",
167
+ "3g_data_trafic",
168
+ "lte_data_trafic",
169
+ ]:
170
+ if col not in df:
171
+ df[col] = 0
172
+
173
+ kpi_masks = {}
174
+ for kpi_col in ["2g_tch_avail", "3g_cell_avail", "lte_cell_avail"]:
175
+ if kpi_col in df.columns:
176
+ kpi_masks[kpi_col] = df[kpi_col].notna()
177
+
178
+ df.fillna(0, inplace=True)
179
+
180
+ for kpi_col, mask in kpi_masks.items():
181
+ df.loc[~mask, kpi_col] = np.nan
182
+
183
+ df["total_voice_trafic"] = df["2g_voice_trafic"] + df["3g_voice_trafic"]
184
+ df["total_data_trafic"] = (
185
+ df["2g_data_trafic"] + df["3g_data_trafic"] + df["lte_data_trafic"]
186
+ )
187
+ df = pd.merge(df, physical_db, on=["code"], how="left")
188
+
189
+ pre_start, pre_end = pd.to_datetime(pre_range[0]), pd.to_datetime(pre_range[1])
190
+ post_start, post_end = pd.to_datetime(post_range[0]), pd.to_datetime(post_range[1])
191
+ last_period_start, last_period_end = pd.to_datetime(
192
+ last_period_range[0]
193
+ ), pd.to_datetime(last_period_range[1])
194
+
195
+ last_period = df[
196
+ (df["date"] >= last_period_start) & (df["date"] <= last_period_end)
197
+ ]
198
+
199
+ def assign_period(x):
200
+ if pre_start <= x <= pre_end:
201
+ return "pre"
202
+ if post_start <= x <= post_end:
203
+ return "post"
204
+ return "other"
205
+
206
+ df["period"] = df["date"].apply(assign_period)
207
+
208
+ comparison = df[df["period"].isin(["pre", "post"])]
209
+
210
+ sum_pivot = (
211
+ comparison.groupby(["code", "period"])[
212
+ ["total_voice_trafic", "total_data_trafic"]
213
+ ]
214
+ .sum()
215
+ .unstack()
216
+ )
217
+ sum_pivot.columns = [f"{metric}_{period}" for metric, period in sum_pivot.columns]
218
+ sum_pivot = sum_pivot.reset_index()
219
+
220
+ sum_pivot["total_voice_trafic_diff"] = (
221
+ sum_pivot["total_voice_trafic_post"] - sum_pivot["total_voice_trafic_pre"]
222
+ )
223
+ sum_pivot["total_data_trafic_diff"] = (
224
+ sum_pivot["total_data_trafic_post"] - sum_pivot["total_data_trafic_pre"]
225
+ )
226
+
227
+ for metric in ["total_voice_trafic", "total_data_trafic"]:
228
+ sum_pivot[f"{metric}_diff_pct"] = (
229
+ (sum_pivot.get(f"{metric}_post", 0) - sum_pivot.get(f"{metric}_pre", 0))
230
+ / sum_pivot.get(f"{metric}_pre", 1)
231
+ ) * 100
232
+
233
+ sum_order = [
234
+ "code",
235
+ "total_voice_trafic_pre",
236
+ "total_voice_trafic_post",
237
+ "total_voice_trafic_diff",
238
+ "total_voice_trafic_diff_pct",
239
+ "total_data_trafic_pre",
240
+ "total_data_trafic_post",
241
+ "total_data_trafic_diff",
242
+ "total_data_trafic_diff_pct",
243
+ ]
244
+ sum_existing_cols = [col for col in sum_order if col in sum_pivot.columns]
245
+ sum_remaining_cols = [
246
+ col for col in sum_pivot.columns if col not in sum_existing_cols
247
+ ]
248
+ sum_pivot = sum_pivot[sum_existing_cols + sum_remaining_cols]
249
+
250
+ avg_pivot = (
251
+ comparison.groupby(["code", "period"])[
252
+ ["total_voice_trafic", "total_data_trafic"]
253
+ ]
254
+ .mean()
255
+ .unstack()
256
+ )
257
+ avg_pivot.columns = [f"{metric}_{period}" for metric, period in avg_pivot.columns]
258
+ avg_pivot = avg_pivot.reset_index()
259
+
260
+ avg_pivot["total_voice_trafic_diff"] = (
261
+ avg_pivot["total_voice_trafic_post"] - avg_pivot["total_voice_trafic_pre"]
262
+ )
263
+ avg_pivot["total_data_trafic_diff"] = (
264
+ avg_pivot["total_data_trafic_post"] - avg_pivot["total_data_trafic_pre"]
265
+ )
266
+
267
+ for metric in ["total_voice_trafic", "total_data_trafic"]:
268
+ avg_pivot[f"{metric}_diff_pct"] = (
269
+ (avg_pivot.get(f"{metric}_post", 0) - avg_pivot.get(f"{metric}_pre", 0))
270
+ / avg_pivot.get(f"{metric}_pre", 1)
271
+ ) * 100
272
+
273
+ avg_pivot = avg_pivot.rename(
274
+ columns={
275
+ "total_voice_trafic_pre": "avg_voice_trafic_pre",
276
+ "total_voice_trafic_post": "avg_voice_trafic_post",
277
+ "total_voice_trafic_diff": "avg_voice_trafic_diff",
278
+ "total_voice_trafic_diff_pct": "avg_voice_trafic_diff_pct",
279
+ "total_data_trafic_pre": "avg_data_trafic_pre",
280
+ "total_data_trafic_post": "avg_data_trafic_post",
281
+ "total_data_trafic_diff": "avg_data_trafic_diff",
282
+ "total_data_trafic_diff_pct": "avg_data_trafic_diff_pct",
283
+ }
284
+ )
285
+
286
+ avg_order = [
287
+ "code",
288
+ "avg_voice_trafic_pre",
289
+ "avg_voice_trafic_post",
290
+ "avg_voice_trafic_diff",
291
+ "avg_voice_trafic_diff_pct",
292
+ "avg_data_trafic_pre",
293
+ "avg_data_trafic_post",
294
+ "avg_data_trafic_diff",
295
+ "avg_data_trafic_diff_pct",
296
+ ]
297
+ avg_existing_cols = [col for col in avg_order if col in avg_pivot.columns]
298
+ avg_remaining_cols = [
299
+ col for col in avg_pivot.columns if col not in avg_existing_cols
300
+ ]
301
+ avg_pivot = avg_pivot[avg_existing_cols + avg_remaining_cols]
302
+
303
+ return df, last_period, sum_pivot.round(2), avg_pivot.round(2)
304
+
305
+
306
+ def analyze_2g_availability(df: pd.DataFrame, sla_2g: float):
307
+ avail_col = "2g_tch_avail"
308
+
309
+ if avail_col not in df.columns or "period" not in df.columns:
310
+ return None, None
311
+
312
+ df_2g = df[df[avail_col].notna()].copy()
313
+ df_2g = df_2g[df_2g["period"].isin(["pre", "post"])]
314
+
315
+ if df_2g.empty:
316
+ return None, None
317
+
318
+ site_pivot = df_2g.groupby(["code", "period"])[avail_col].mean().unstack()
319
+
320
+ site_pivot = site_pivot.rename(
321
+ columns={"pre": "tch_avail_pre", "post": "tch_avail_post"}
322
+ )
323
+
324
+ if "tch_avail_pre" not in site_pivot.columns:
325
+ site_pivot["tch_avail_pre"] = pd.NA
326
+ if "tch_avail_post" not in site_pivot.columns:
327
+ site_pivot["tch_avail_post"] = pd.NA
328
+
329
+ site_pivot["tch_avail_diff"] = (
330
+ site_pivot["tch_avail_post"] - site_pivot["tch_avail_pre"]
331
+ )
332
+ site_pivot["pre_ok_vs_sla"] = site_pivot["tch_avail_pre"] >= sla_2g
333
+ site_pivot["post_ok_vs_sla"] = site_pivot["tch_avail_post"] >= sla_2g
334
+
335
+ site_pivot = site_pivot.reset_index()
336
+
337
+ summary_rows = []
338
+ for period_label, col_name in [
339
+ ("pre", "tch_avail_pre"),
340
+ ("post", "tch_avail_post"),
341
+ ]:
342
+ series = site_pivot[col_name].dropna()
343
+ total_cells = series.shape[0]
344
+ if total_cells == 0:
345
+ summary_rows.append(
346
+ {
347
+ "period": period_label,
348
+ "cells": 0,
349
+ "avg_availability": pd.NA,
350
+ "median_availability": pd.NA,
351
+ "p05_availability": pd.NA,
352
+ "p95_availability": pd.NA,
353
+ "min_availability": pd.NA,
354
+ "max_availability": pd.NA,
355
+ "cells_ge_sla": 0,
356
+ "cells_lt_sla": 0,
357
+ "pct_cells_ge_sla": pd.NA,
358
+ }
359
+ )
360
+ continue
361
+ cells_ge_sla = (series >= sla_2g).sum()
362
+ cells_lt_sla = (series < sla_2g).sum()
363
+ summary_rows.append(
364
+ {
365
+ "period": period_label,
366
+ "cells": int(total_cells),
367
+ "avg_availability": series.mean(),
368
+ "median_availability": series.median(),
369
+ "p05_availability": series.quantile(0.05),
370
+ "p95_availability": series.quantile(0.95),
371
+ "min_availability": series.min(),
372
+ "max_availability": series.max(),
373
+ "cells_ge_sla": int(cells_ge_sla),
374
+ "cells_lt_sla": int(cells_lt_sla),
375
+ "pct_cells_ge_sla": cells_ge_sla / total_cells * 100,
376
+ }
377
+ )
378
+
379
+ summary_df = pd.DataFrame(summary_rows)
380
+
381
+ return summary_df, site_pivot
382
+
383
+
384
+ def analyze_3g_availability(df: pd.DataFrame, sla_3g: float):
385
+ avail_col = "3g_cell_avail"
386
+
387
+ if avail_col not in df.columns or "period" not in df.columns:
388
+ return None, None
389
+
390
+ df_3g = df[df[avail_col].notna()].copy()
391
+ df_3g = df_3g[df_3g["period"].isin(["pre", "post"])]
392
+
393
+ if df_3g.empty:
394
+ return None, None
395
+
396
+ site_pivot = df_3g.groupby(["code", "period"])[avail_col].mean().unstack()
397
+
398
+ site_pivot = site_pivot.rename(
399
+ columns={"pre": "cell_avail_pre", "post": "cell_avail_post"}
400
+ )
401
+
402
+ if "cell_avail_pre" not in site_pivot.columns:
403
+ site_pivot["cell_avail_pre"] = pd.NA
404
+ if "cell_avail_post" not in site_pivot.columns:
405
+ site_pivot["cell_avail_post"] = pd.NA
406
+
407
+ site_pivot["cell_avail_diff"] = (
408
+ site_pivot["cell_avail_post"] - site_pivot["cell_avail_pre"]
409
+ )
410
+ site_pivot["pre_ok_vs_sla"] = site_pivot["cell_avail_pre"] >= sla_3g
411
+ site_pivot["post_ok_vs_sla"] = site_pivot["cell_avail_post"] >= sla_3g
412
+
413
+ site_pivot = site_pivot.reset_index()
414
+
415
+ summary_rows = []
416
+ for period_label, col_name in [
417
+ ("pre", "cell_avail_pre"),
418
+ ("post", "cell_avail_post"),
419
+ ]:
420
+ series = site_pivot[col_name].dropna()
421
+ total_cells = series.shape[0]
422
+ if total_cells == 0:
423
+ summary_rows.append(
424
+ {
425
+ "period": period_label,
426
+ "cells": 0,
427
+ "avg_availability": pd.NA,
428
+ "median_availability": pd.NA,
429
+ "p05_availability": pd.NA,
430
+ "p95_availability": pd.NA,
431
+ "min_availability": pd.NA,
432
+ "max_availability": pd.NA,
433
+ "cells_ge_sla": 0,
434
+ "cells_lt_sla": 0,
435
+ "pct_cells_ge_sla": pd.NA,
436
+ }
437
+ )
438
+ continue
439
+ cells_ge_sla = (series >= sla_3g).sum()
440
+ cells_lt_sla = (series < sla_3g).sum()
441
+ summary_rows.append(
442
+ {
443
+ "period": period_label,
444
+ "cells": int(total_cells),
445
+ "avg_availability": series.mean(),
446
+ "median_availability": series.median(),
447
+ "p05_availability": series.quantile(0.05),
448
+ "p95_availability": series.quantile(0.95),
449
+ "min_availability": series.min(),
450
+ "max_availability": series.max(),
451
+ "cells_ge_sla": int(cells_ge_sla),
452
+ "cells_lt_sla": int(cells_lt_sla),
453
+ "pct_cells_ge_sla": cells_ge_sla / total_cells * 100,
454
+ }
455
+ )
456
+
457
+ summary_df = pd.DataFrame(summary_rows)
458
+
459
+ return summary_df, site_pivot
460
+
461
+
462
+ def analyze_lte_availability(df: pd.DataFrame, sla_lte: float):
463
+ avail_col = "lte_cell_avail"
464
+
465
+ if avail_col not in df.columns or "period" not in df.columns:
466
+ return None, None
467
+
468
+ df_lte = df[df[avail_col].notna()].copy()
469
+ df_lte = df_lte[df_lte["period"].isin(["pre", "post"])]
470
+
471
+ if df_lte.empty:
472
+ return None, None
473
+
474
+ site_pivot = df_lte.groupby(["code", "period"])[avail_col].mean().unstack()
475
+
476
+ site_pivot = site_pivot.rename(
477
+ columns={"pre": "lte_avail_pre", "post": "lte_avail_post"}
478
+ )
479
+
480
+ if "lte_avail_pre" not in site_pivot.columns:
481
+ site_pivot["lte_avail_pre"] = pd.NA
482
+ if "lte_avail_post" not in site_pivot.columns:
483
+ site_pivot["lte_avail_post"] = pd.NA
484
+
485
+ site_pivot["lte_avail_diff"] = (
486
+ site_pivot["lte_avail_post"] - site_pivot["lte_avail_pre"]
487
+ )
488
+ site_pivot["pre_ok_vs_sla"] = site_pivot["lte_avail_pre"] >= sla_lte
489
+ site_pivot["post_ok_vs_sla"] = site_pivot["lte_avail_post"] >= sla_lte
490
+
491
+ site_pivot = site_pivot.reset_index()
492
+
493
+ summary_rows = []
494
+ for period_label, col_name in [
495
+ ("pre", "lte_avail_pre"),
496
+ ("post", "lte_avail_post"),
497
+ ]:
498
+ series = site_pivot[col_name].dropna()
499
+ total_cells = series.shape[0]
500
+ if total_cells == 0:
501
+ summary_rows.append(
502
+ {
503
+ "period": period_label,
504
+ "cells": 0,
505
+ "avg_availability": pd.NA,
506
+ "median_availability": pd.NA,
507
+ "p05_availability": pd.NA,
508
+ "p95_availability": pd.NA,
509
+ "min_availability": pd.NA,
510
+ "max_availability": pd.NA,
511
+ "cells_ge_sla": 0,
512
+ "cells_lt_sla": 0,
513
+ "pct_cells_ge_sla": pd.NA,
514
+ }
515
+ )
516
+ continue
517
+ cells_ge_sla = (series >= sla_lte).sum()
518
+ cells_lt_sla = (series < sla_lte).sum()
519
+ summary_rows.append(
520
+ {
521
+ "period": period_label,
522
+ "cells": int(total_cells),
523
+ "avg_availability": series.mean(),
524
+ "median_availability": series.median(),
525
+ "p05_availability": series.quantile(0.05),
526
+ "p95_availability": series.quantile(0.95),
527
+ "min_availability": series.min(),
528
+ "max_availability": series.max(),
529
+ "cells_ge_sla": int(cells_ge_sla),
530
+ "cells_lt_sla": int(cells_lt_sla),
531
+ "pct_cells_ge_sla": cells_ge_sla / total_cells * 100,
532
+ }
533
+ )
534
+
535
+ summary_df = pd.DataFrame(summary_rows)
536
+
537
+ return summary_df, site_pivot
538
+
539
+
540
+ def analyze_multirat_availability(
541
+ df: pd.DataFrame, sla_2g: float, sla_3g: float, sla_lte: float
542
+ ):
543
+ if "period" not in df.columns:
544
+ return None
545
+
546
+ rat_cols = []
547
+ if "2g_tch_avail" in df.columns:
548
+ rat_cols.append("2g_tch_avail")
549
+ if "3g_cell_avail" in df.columns:
550
+ rat_cols.append("3g_cell_avail")
551
+ if "lte_cell_avail" in df.columns:
552
+ rat_cols.append("lte_cell_avail")
553
+
554
+ if not rat_cols:
555
+ return None
556
+
557
+ agg_dict = {col: "mean" for col in rat_cols}
558
+
559
+ df_pre = df[df["period"] == "pre"]
560
+ df_post = df[df["period"] == "post"]
561
+
562
+ pre = df_pre.groupby("code", as_index=False).agg(agg_dict)
563
+ post = df_post.groupby("code", as_index=False).agg(agg_dict)
564
+
565
+ rename_map_pre = {
566
+ "2g_tch_avail": "2g_avail_pre",
567
+ "3g_cell_avail": "3g_avail_pre",
568
+ "lte_cell_avail": "lte_avail_pre",
569
+ }
570
+ rename_map_post = {
571
+ "2g_tch_avail": "2g_avail_post",
572
+ "3g_cell_avail": "3g_avail_post",
573
+ "lte_cell_avail": "lte_avail_post",
574
+ }
575
+
576
+ pre = pre.rename(columns=rename_map_pre)
577
+ post = post.rename(columns=rename_map_post)
578
+
579
+ multi = pd.merge(pre, post, on="code", how="outer")
580
+
581
+ if not df_post.empty and {
582
+ "total_voice_trafic",
583
+ "total_data_trafic",
584
+ }.issubset(df_post.columns):
585
+ post_traffic = (
586
+ df_post.groupby("code", as_index=False)[
587
+ ["total_voice_trafic", "total_data_trafic"]
588
+ ]
589
+ .sum()
590
+ .rename(
591
+ columns={
592
+ "total_voice_trafic": "post_total_voice_trafic",
593
+ "total_data_trafic": "post_total_data_trafic",
594
+ }
595
+ )
596
+ )
597
+ multi = pd.merge(multi, post_traffic, on="code", how="left")
598
+
599
+ if "City" in df.columns:
600
+ city_df = df[["code", "City"]].drop_duplicates("code")
601
+ multi = pd.merge(multi, city_df, on="code", how="left")
602
+
603
+ def _ok_flag(series: pd.Series, sla: float) -> pd.Series:
604
+ if series.name not in multi.columns:
605
+ return pd.Series([pd.NA] * len(multi), index=multi.index)
606
+ ok = multi[series.name] >= sla
607
+ ok = ok.where(multi[series.name].notna(), pd.NA)
608
+ return ok
609
+
610
+ if "2g_avail_post" in multi.columns:
611
+ multi["ok_2g_post"] = _ok_flag(multi["2g_avail_post"], sla_2g)
612
+ if "3g_avail_post" in multi.columns:
613
+ multi["ok_3g_post"] = _ok_flag(multi["3g_avail_post"], sla_3g)
614
+ if "lte_avail_post" in multi.columns:
615
+ multi["ok_lte_post"] = _ok_flag(multi["lte_avail_post"], sla_lte)
616
+
617
+ def classify_row(row):
618
+ rats_status = []
619
+ for rat, col in [
620
+ ("2G", "ok_2g_post"),
621
+ ("3G", "ok_3g_post"),
622
+ ("LTE", "ok_lte_post"),
623
+ ]:
624
+ if col in row and not pd.isna(row[col]):
625
+ rats_status.append((rat, bool(row[col])))
626
+
627
+ if not rats_status:
628
+ return "No RAT data"
629
+
630
+ bad_rats = [rat for rat, ok in rats_status if not ok]
631
+ if not bad_rats:
632
+ return "OK all RAT"
633
+ if len(bad_rats) == 1:
634
+ return f"Degraded {bad_rats[0]} only"
635
+ return "Degraded multi-RAT (" + ",".join(bad_rats) + ")"
636
+
637
+ multi["post_multirat_status"] = multi.apply(classify_row, axis=1)
638
+
639
+ ordered_cols = ["code"]
640
+ if "City" in multi.columns:
641
+ ordered_cols.append("City")
642
+ for col in [
643
+ "2g_avail_pre",
644
+ "2g_avail_post",
645
+ "3g_avail_pre",
646
+ "3g_avail_post",
647
+ "lte_avail_pre",
648
+ "lte_avail_post",
649
+ "post_total_voice_trafic",
650
+ "post_total_data_trafic",
651
+ "ok_2g_post",
652
+ "ok_3g_post",
653
+ "ok_lte_post",
654
+ "post_multirat_status",
655
+ ]:
656
+ if col in multi.columns:
657
+ ordered_cols.append(col)
658
+
659
+ remaining_cols = [c for c in multi.columns if c not in ordered_cols]
660
+ multi = multi[ordered_cols + remaining_cols]
661
+
662
+ return multi
663
+
664
+
665
+ def analyze_persistent_availability(
666
+ df: pd.DataFrame,
667
+ multi_rat_df: pd.DataFrame,
668
+ sla_2g: float,
669
+ sla_3g: float,
670
+ sla_lte: float,
671
+ min_consecutive_days: int = 3,
672
+ ) -> pd.DataFrame:
673
+ if df is None or df.empty:
674
+ return pd.DataFrame()
675
+ if "date" not in df.columns or "code" not in df.columns:
676
+ return pd.DataFrame()
677
+
678
+ work_df = df.copy()
679
+ work_df["date_only"] = work_df["date"].dt.date
680
+
681
+ site_stats = {}
682
+
683
+ def _update_stats(rat_key_prefix: str, grouped: pd.DataFrame, sla: float) -> None:
684
+ if grouped.empty:
685
+ return
686
+ for code, group in grouped.groupby("code"):
687
+ group = group.sort_values("date_only")
688
+ dates = pd.to_datetime(group["date_only"]).tolist()
689
+ below_flags = (group["value"] < sla).tolist()
690
+ max_streak = 0
691
+ current_streak = 0
692
+ total_below = 0
693
+ last_date = None
694
+ for flag, current_date in zip(below_flags, dates):
695
+ if flag:
696
+ total_below += 1
697
+ if (
698
+ last_date is not None
699
+ and current_date == last_date + timedelta(days=1)
700
+ and current_streak > 0
701
+ ):
702
+ current_streak += 1
703
+ else:
704
+ current_streak = 1
705
+ if current_streak > max_streak:
706
+ max_streak = current_streak
707
+ else:
708
+ current_streak = 0
709
+ last_date = current_date
710
+ stats = site_stats.setdefault(
711
+ code,
712
+ {
713
+ "code": code,
714
+ "max_streak_2g": 0,
715
+ "max_streak_3g": 0,
716
+ "max_streak_lte": 0,
717
+ "below_days_2g": 0,
718
+ "below_days_3g": 0,
719
+ "below_days_lte": 0,
720
+ },
721
+ )
722
+ stats[f"max_streak_{rat_key_prefix}"] = max_streak
723
+ stats[f"below_days_{rat_key_prefix}"] = total_below
724
+
725
+ for rat_col, rat_key, sla in [
726
+ ("2g_tch_avail", "2g", sla_2g),
727
+ ("3g_cell_avail", "3g", sla_3g),
728
+ ("lte_cell_avail", "lte", sla_lte),
729
+ ]:
730
+ if rat_col in work_df.columns:
731
+ g = (
732
+ work_df.dropna(subset=[rat_col])
733
+ .groupby(["code", "date_only"])[rat_col]
734
+ .mean()
735
+ .reset_index()
736
+ )
737
+ g = g.rename(columns={rat_col: "value"})
738
+ _update_stats(rat_key, g, sla)
739
+
740
+ if not site_stats:
741
+ return pd.DataFrame()
742
+
743
+ rows = []
744
+ for code, s in site_stats.items():
745
+        max_2g = s.get("max_streak_2g", 0)
+        max_3g = s.get("max_streak_3g", 0)
+        max_lte = s.get("max_streak_lte", 0)
+        below_2g = s.get("below_days_2g", 0)
+        below_3g = s.get("below_days_3g", 0)
+        below_lte = s.get("below_days_lte", 0)
+        persistent_2g = max_2g >= min_consecutive_days if max_2g else False
+        persistent_3g = max_3g >= min_consecutive_days if max_3g else False
+        persistent_lte = max_lte >= min_consecutive_days if max_lte else False
+        total_below_any = below_2g + below_3g + below_lte
+        persistent_any = persistent_2g or persistent_3g or persistent_lte
+        rats_persistent_count = sum(
+            [persistent_2g is True, persistent_3g is True, persistent_lte is True]
+        )
+        rows.append(
+            {
+                "code": code,
+                "persistent_issue_2g": persistent_2g,
+                "persistent_issue_3g": persistent_3g,
+                "persistent_issue_lte": persistent_lte,
+                "max_consecutive_days_2g": max_2g,
+                "max_consecutive_days_3g": max_3g,
+                "max_consecutive_days_lte": max_lte,
+                "total_below_days_2g": below_2g,
+                "total_below_days_3g": below_3g,
+                "total_below_days_lte": below_lte,
+                "total_below_days_any": total_below_any,
+                "persistent_issue_any": persistent_any,
+                "persistent_rats_count": rats_persistent_count,
+            }
+        )
+
+    result = pd.DataFrame(rows)
+    result = result[result["persistent_issue_any"]]
+    if result.empty:
+        return result
+
+    if multi_rat_df is not None and not multi_rat_df.empty:
+        cols_to_merge = [
+            c
+            for c in [
+                "code",
+                "City",
+                "post_total_voice_trafic",
+                "post_total_data_trafic",
+                "post_multirat_status",
+            ]
+            if c in multi_rat_df.columns
+        ]
+        if cols_to_merge:
+            result = pd.merge(
+                result,
+                multi_rat_df[cols_to_merge].drop_duplicates("code"),
+                on="code",
+                how="left",
+            )
+
+    if "post_total_data_trafic" not in result.columns:
+        result["post_total_data_trafic"] = 0.0
+
+    result["criticity_score"] = (
+        result["post_total_data_trafic"].fillna(0) * 1.0
+        + result["total_below_days_any"].fillna(0) * 100.0
+        + result["persistent_rats_count"].fillna(0) * 1000.0
+    )
+
+    result = result.sort_values(
+        by=["criticity_score", "total_below_days_any"], ascending=[False, False]
+    )
+
+    return result
+
+
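For reference, the streak and scoring logic above can be reduced to two plain-Python helpers. This is a hypothetical standalone sketch (the real code works on per-site dicts and a pandas DataFrame; the names `max_streak_below` and `criticity_score` as a function are mine):

```python
def max_streak_below(values, sla):
    # Longest run of consecutive daily availability values below the SLA.
    best = run = 0
    for v in values:
        run = run + 1 if v < sla else 0
        best = max(best, run)
    return best


def criticity_score(post_data_traffic, below_days_any, persistent_rats):
    # Same weights as the DataFrame expression above: each below-SLA day
    # adds 100 and each persistently failing RAT adds 1000, so RAT count
    # dominates, then below-days, with data traffic as the tie-breaker.
    return post_data_traffic * 1.0 + below_days_any * 100.0 + persistent_rats * 1000.0


print(max_streak_below([99, 97, 96.5, 98.2, 95, 94, 93], 98.0))  # 3
print(criticity_score(50.0, 4, 2))   # 2450.0
print(criticity_score(900.0, 4, 1))  # 2300.0
```

With these weights, a low-traffic site failing on two RATs (2450.0) still outranks a much busier site failing on one (2300.0).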
818
+ def monthly_data_analysis(df: pd.DataFrame):
+     df["date"] = pd.to_datetime(df["date"])
+     df["month_year"] = df["date"].dt.to_period("M").astype(str)
+
+     voice_trafic = df.pivot_table(
+         index="code",
+         columns="month_year",
+         values="total_voice_trafic",
+         aggfunc="sum",
+         fill_value=0,
+     )
+     voice_trafic = voice_trafic.reindex(sorted(voice_trafic.columns), axis=1)
+
+     data_trafic = df.pivot_table(
+         index="code",
+         columns="month_year",
+         values="total_data_trafic",
+         aggfunc="sum",
+         fill_value=0,
+     )
+     data_trafic = data_trafic.reindex(sorted(data_trafic.columns), axis=1)
+
+     return voice_trafic, data_trafic
+
+
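The pivot in `monthly_data_analysis` buckets daily per-site rows into sorted `YYYY-MM` columns. The grouping step alone can be sketched without pandas; the row tuples below are made up for illustration:

```python
from collections import defaultdict
from datetime import date

# Hypothetical daily rows: (site code, date, data traffic)
rows = [
    (101, date(2024, 1, 5), 10.0),
    (101, date(2024, 1, 20), 5.0),
    (101, date(2024, 2, 1), 7.5),
]

# Sum traffic per (site, month) bucket, mirroring pivot_table(aggfunc="sum")
monthly = defaultdict(float)
for code, d, traffic in rows:
    monthly[(code, d.strftime("%Y-%m"))] += traffic

print(sorted(monthly.items()))
# [((101, '2024-01'), 15.0), ((101, '2024-02'), 7.5)]
```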
843
+ # --------------------------------------------------------------------------------------
+ # Global state for drill-down views & export
+ # --------------------------------------------------------------------------------------
+
+ current_full_df: pd.DataFrame | None = None
+ current_last_period_df: pd.DataFrame | None = None
+ current_analysis_df: pd.DataFrame | None = None
+ current_analysis_last_period_df: pd.DataFrame | None = None
+
+ current_multi_rat_df: pd.DataFrame | None = None
+ current_persistent_df: pd.DataFrame | None = None
+
+ current_site_2g_avail: pd.DataFrame | None = None
+ current_site_3g_avail: pd.DataFrame | None = None
+ current_site_lte_avail: pd.DataFrame | None = None
+
+ current_summary_2g_avail: pd.DataFrame | None = None
+ current_summary_3g_avail: pd.DataFrame | None = None
+ current_summary_lte_avail: pd.DataFrame | None = None
+
+ current_monthly_voice_df: pd.DataFrame | None = None
+ current_monthly_data_df: pd.DataFrame | None = None
+ current_sum_pre_post_df: pd.DataFrame | None = None
+ current_avg_pre_post_df: pd.DataFrame | None = None
+ current_availability_summary_all_df: pd.DataFrame | None = None
+
+ current_export_multi_rat_df: pd.DataFrame | None = None
+ current_export_persistent_df: pd.DataFrame | None = None
+ current_export_bytes: bytes | None = None
+
+
874
+ # --------------------------------------------------------------------------------------
+ # Widgets
+ # --------------------------------------------------------------------------------------
+
+ PLOTLY_CONFIG = {"displaylogo": False, "scrollZoom": True, "displayModeBar": True}
+
+ file_2g = pn.widgets.FileInput(name="2G Traffic Report", accept=".csv,.zip")
+ file_3g = pn.widgets.FileInput(name="3G Traffic Report", accept=".csv,.zip")
+ file_lte = pn.widgets.FileInput(name="LTE Traffic Report", accept=".csv,.zip")
+
+ pre_range = pn.widgets.DateRangePicker(name="Pre-period (from - to)")
+ post_range = pn.widgets.DateRangePicker(name="Post-period (from - to)")
+ last_range = pn.widgets.DateRangePicker(name="Last period (from - to)")
+
+ sla_2g = pn.widgets.FloatInput(name="2G TCH availability SLA (%)", value=98.0, step=0.1)
+ sla_3g = pn.widgets.FloatInput(
+     name="3G Cell availability SLA (%)", value=98.0, step=0.1
+ )
+ sla_lte = pn.widgets.FloatInput(
+     name="LTE Cell availability SLA (%)", value=98.0, step=0.1
+ )
+
+ number_of_top_trafic_sites = pn.widgets.IntInput(
+     name="Number of top traffic sites", value=25
+ )
+
+ min_persistent_days_widget = pn.widgets.IntInput(
+     name="Minimum consecutive days below SLA to flag persistent issue",
+     value=3,
+ )
+
+ top_critical_n_widget = pn.widgets.IntInput(
+     name="Number of top critical sites to display", value=25
+ )
+
+ run_button = pn.widgets.Button(name="Run analysis", button_type="primary")
+
+ status_pane = pn.pane.Alert(
+     "Upload the 3 reports, select the 3 periods and click 'Run analysis'",
+     alert_type="primary",
+ )
+
+ summary_table = pn.widgets.Tabulator(
+     height=250,
+     sizing_mode="stretch_width",
+     layout="fit_data_table",
+ )
+
+ sum_pre_post_table = pn.widgets.Tabulator(
+     height=250,
+     sizing_mode="stretch_width",
+     layout="fit_data_table",
+ )
+ summary_2g_table = pn.widgets.Tabulator(
+     height=250,
+     sizing_mode="stretch_width",
+     layout="fit_data_table",
+ )
+ worst_2g_table = pn.widgets.Tabulator(
+     height=250,
+     sizing_mode="stretch_width",
+     layout="fit_data_table",
+ )
+ summary_3g_table = pn.widgets.Tabulator(
+     height=250,
+     sizing_mode="stretch_width",
+     layout="fit_data_table",
+ )
+ worst_3g_table = pn.widgets.Tabulator(
+     height=250,
+     sizing_mode="stretch_width",
+     layout="fit_data_table",
+ )
+ summary_lte_table = pn.widgets.Tabulator(
+     height=250,
+     sizing_mode="stretch_width",
+     layout="fit_data_table",
+ )
+ worst_lte_table = pn.widgets.Tabulator(
+     height=250,
+     sizing_mode="stretch_width",
+     layout="fit_data_table",
+ )
+ multi_rat_table = pn.widgets.Tabulator(
+     height=250,
+     sizing_mode="stretch_width",
+     layout="fit_data_table",
+ )
+ persistent_table = pn.widgets.Tabulator(
+     height=250,
+     sizing_mode="stretch_width",
+     layout="fit_data_table",
+ )
+
+ site_select = pn.widgets.Select(name="Select a site for detailed view", options={})
+ site_traffic_plot = pn.pane.Plotly(
+     height=400,
+     sizing_mode="stretch_width",
+     config=PLOTLY_CONFIG,
+ )
+ site_avail_plot = pn.pane.Plotly(
+     height=400,
+     sizing_mode="stretch_width",
+     config=PLOTLY_CONFIG,
+ )
+ site_degraded_table = pn.widgets.Tabulator(
+     height=200,
+     sizing_mode="stretch_width",
+     layout="fit_data_table",
+ )
+
+ city_select = pn.widgets.Select(name="Select a City for aggregated view", options=[])
+ city_traffic_plot = pn.pane.Plotly(
+     height=400,
+     sizing_mode="stretch_width",
+     config=PLOTLY_CONFIG,
+ )
+ city_avail_plot = pn.pane.Plotly(
+     height=400,
+     sizing_mode="stretch_width",
+     config=PLOTLY_CONFIG,
+ )
+ city_degraded_table = pn.widgets.Tabulator(
+     height=200,
+     sizing_mode="stretch_width",
+     layout="fit_data_table",
+ )
+
+ daily_avail_plot = pn.pane.Plotly(
+     height=400,
+     sizing_mode="stretch_width",
+     config=PLOTLY_CONFIG,
+ )
+ daily_degraded_table = pn.widgets.Tabulator(
+     height=200,
+     sizing_mode="stretch_width",
+     layout="fit_data_table",
+ )
+
+ top_data_sites_table = pn.widgets.Tabulator(
+     height=250,
+     sizing_mode="stretch_width",
+     layout="fit_data_table",
+ )
+ top_voice_sites_table = pn.widgets.Tabulator(
+     height=250,
+     sizing_mode="stretch_width",
+     layout="fit_data_table",
+ )
+ top_data_bar_plot = pn.pane.Plotly(
+     height=400,
+     sizing_mode="stretch_width",
+     config=PLOTLY_CONFIG,
+ )
+ top_voice_bar_plot = pn.pane.Plotly(
+     height=400,
+     sizing_mode="stretch_width",
+     config=PLOTLY_CONFIG,
+ )
+ data_map_plot = pn.pane.Plotly(
+     height=500,
+     sizing_mode="stretch_width",
+     config=PLOTLY_CONFIG,
+ )
+ voice_map_plot = pn.pane.Plotly(
+     height=500,
+     sizing_mode="stretch_width",
+     config=PLOTLY_CONFIG,
+ )
+
+ # Shared pane used inside the fullscreen modal
+ fullscreen_plot = pn.pane.Plotly(
+     sizing_mode="stretch_both",
+     min_height=700,
+     config=PLOTLY_CONFIG,
+ )
+
+ # Fullscreen buttons for each Plotly plot
+ site_traffic_fullscreen_btn = pn.widgets.Button(
+     name="Full screen site traffic", button_type="default"
+ )
+ site_avail_fullscreen_btn = pn.widgets.Button(
+     name="Full screen site availability", button_type="default"
+ )
+ city_traffic_fullscreen_btn = pn.widgets.Button(
+     name="Full screen city traffic", button_type="default"
+ )
+ city_avail_fullscreen_btn = pn.widgets.Button(
+     name="Full screen city availability", button_type="default"
+ )
+ daily_avail_fullscreen_btn = pn.widgets.Button(
+     name="Full screen daily availability", button_type="default"
+ )
+ top_data_fullscreen_btn = pn.widgets.Button(
+     name="Full screen top data bar", button_type="default"
+ )
+ top_voice_fullscreen_btn = pn.widgets.Button(
+     name="Full screen top voice bar", button_type="default"
+ )
+ data_map_fullscreen_btn = pn.widgets.Button(
+     name="Full screen data map", button_type="default"
+ )
+ voice_map_fullscreen_btn = pn.widgets.Button(
+     name="Full screen voice map", button_type="default"
+ )
+
+ multi_rat_download = pn.widgets.FileDownload(
+     label="Download Multi-RAT table (CSV)",
+     filename="multi_rat_availability.csv",
+     button_type="default",
+ )
+
+ persistent_download = pn.widgets.FileDownload(
+     label="Download persistent issues (CSV)",
+     filename="persistent_issues.csv",
+     button_type="default",
+ )
+
+ top_data_download = pn.widgets.FileDownload(
+     label="Download top data sites (CSV)",
+     filename="top_data_sites.csv",
+     button_type="default",
+ )
+
+ top_voice_download = pn.widgets.FileDownload(
+     label="Download top voice sites (CSV)",
+     filename="top_voice_sites.csv",
+     button_type="default",
+ )
+
+ export_button = pn.widgets.FileDownload(
+     label="Download the Analysis Report",
+     filename="Global_Trafic_Analysis_Report.xlsx",
+     button_type="primary",
+ )
+
+
1111
+ # --------------------------------------------------------------------------------------
+ # Callback
+ # --------------------------------------------------------------------------------------
+
+
+ def _validate_date_range(rng: tuple[date, date] | list[date], label: str) -> None:
+     if not rng or len(rng) != 2:
+         raise ValueError(f"Please select 2 dates for {label}.")
+     if rng[0] is None or rng[1] is None:
+         raise ValueError(f"Please select valid dates for {label}.")
+
+
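`_validate_date_range` simply rejects anything that is not a pair of non-None dates. A condensed sketch of the same contract (the `validate_range` name and merged message are mine, not the app's):

```python
from datetime import date


def validate_range(rng, label):
    # One guard covering both checks above: the range must be a pair of
    # non-None dates, otherwise reject it with a user-facing message.
    if not rng or len(rng) != 2 or rng[0] is None or rng[1] is None:
        raise ValueError(f"Please select 2 valid dates for {label}.")


validate_range((date(2024, 1, 1), date(2024, 1, 31)), "pre-period")  # passes silently
try:
    validate_range((date(2024, 1, 1), None), "post-period")
except ValueError as exc:
    print(exc)  # Please select 2 valid dates for post-period.
```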
1123
+ def run_analysis(event=None):  # event param required by on_click
+     try:
+         status_pane.object = "Running analysis..."
+         status_pane.alert_type = "primary"
+
+         global current_full_df, current_last_period_df
+         global current_analysis_df, current_analysis_last_period_df
+         global current_multi_rat_df, current_persistent_df
+         global current_site_2g_avail, current_site_3g_avail, current_site_lte_avail
+         global current_summary_2g_avail, current_summary_3g_avail, current_summary_lte_avail
+         global current_monthly_voice_df, current_monthly_data_df
+         global current_sum_pre_post_df, current_avg_pre_post_df
+         global current_availability_summary_all_df
+         global current_export_multi_rat_df, current_export_persistent_df
+         global current_export_bytes
+
+         # Basic validations
+         if not (file_2g.value and file_3g.value and file_lte.value):
+             raise ValueError("Please upload all 3 traffic reports (2G, 3G, LTE).")
+
+         _validate_date_range(pre_range.value, "pre-period")
+         _validate_date_range(post_range.value, "post-period")
+         _validate_date_range(last_range.value, "last period")
+
+         # Simple check on overlapping pre/post (same logic as the Streamlit version, but lighter)
+         pre_start, pre_end = pre_range.value
+         post_start, post_end = post_range.value
+         if pre_start == post_start and pre_end == post_end:
+             raise ValueError("Pre and post periods are the same.")
+         if pre_start < post_start and pre_end > post_end:
+             raise ValueError("Pre and post periods are overlapping.")
+
1155
+         df_2g = read_fileinput_to_df(file_2g)
+         df_3g = read_fileinput_to_df(file_3g)
+         df_lte = read_fileinput_to_df(file_lte)
+
+         if df_2g is None or df_3g is None or df_lte is None:
+             raise ValueError("Failed to read one or more input files.")
+
+         summary = pd.DataFrame(
+             {
+                 "Dataset": ["2G", "3G", "LTE"],
+                 "Rows": [len(df_2g), len(df_3g), len(df_lte)],
+                 "Columns": [df_2g.shape[1], df_3g.shape[1], df_lte.shape[1]],
+             }
+         )
+         summary_table.value = summary
+
+         df_2g_clean = preprocess_2g(df_2g)
+         df_3g_clean = preprocess_3g(df_3g)
+         df_lte_clean = preprocess_lte(df_lte)
+
+         full_df, last_period, sum_pre_post_analysis, avg_pre_post_analysis = (
+             merge_and_compare(
+                 df_2g_clean,
+                 df_3g_clean,
+                 df_lte_clean,
+                 pre_range.value,
+                 post_range.value,
+                 last_range.value,
+             )
+         )
+
+         monthly_voice_df, monthly_data_df = monthly_data_analysis(full_df)
+
+         analysis_df = full_df
+
+         # Persist global state for later drill-down / export
+         current_full_df = full_df
+         current_last_period_df = last_period
+         current_analysis_df = analysis_df
+         current_analysis_last_period_df = last_period
+         current_monthly_voice_df = monthly_voice_df
+         current_monthly_data_df = monthly_data_df
+         current_sum_pre_post_df = sum_pre_post_analysis
+         current_avg_pre_post_df = avg_pre_post_analysis
+
+         sum_pre_post_table.value = sum_pre_post_analysis
+
+         summary_2g_avail, site_2g_avail = analyze_2g_availability(
+             analysis_df, float(sla_2g.value)
+         )
+         if summary_2g_avail is not None:
+             summary_2g_table.value = summary_2g_avail.round(2)
+             worst_sites_2g = site_2g_avail.sort_values("tch_avail_post").head(25)
+             worst_2g_table.value = worst_sites_2g.round(2)
+         else:
+             summary_2g_table.value = pd.DataFrame()
+             worst_2g_table.value = pd.DataFrame()
+
+         current_summary_2g_avail = summary_2g_avail
+         current_site_2g_avail = site_2g_avail if summary_2g_avail is not None else None
+
+         summary_3g_avail, site_3g_avail = analyze_3g_availability(
+             analysis_df, float(sla_3g.value)
+         )
+         if summary_3g_avail is not None:
+             summary_3g_table.value = summary_3g_avail.round(2)
+             worst_sites_3g = site_3g_avail.sort_values("cell_avail_post").head(25)
+             worst_3g_table.value = worst_sites_3g.round(2)
+         else:
+             summary_3g_table.value = pd.DataFrame()
+             worst_3g_table.value = pd.DataFrame()
+
+         current_summary_3g_avail = summary_3g_avail
+         current_site_3g_avail = site_3g_avail if summary_3g_avail is not None else None
+
+         summary_lte_avail, site_lte_avail = analyze_lte_availability(
+             analysis_df, float(sla_lte.value)
+         )
+         if summary_lte_avail is not None:
+             summary_lte_table.value = summary_lte_avail.round(2)
+             worst_sites_lte = site_lte_avail.sort_values("lte_avail_post").head(25)
+             worst_lte_table.value = worst_sites_lte.round(2)
+         else:
+             summary_lte_table.value = pd.DataFrame()
+             worst_lte_table.value = pd.DataFrame()
+
+         current_summary_lte_avail = summary_lte_avail
+         current_site_lte_avail = (
+             site_lte_avail if summary_lte_avail is not None else None
+         )
+
+         # Build availability summary across RATs for export
+         availability_frames = []
+         if summary_2g_avail is not None:
+             tmp = summary_2g_avail.copy()
+             tmp["RAT"] = "2G"
+             availability_frames.append(tmp)
+         if summary_3g_avail is not None:
+             tmp = summary_3g_avail.copy()
+             tmp["RAT"] = "3G"
+             availability_frames.append(tmp)
+         if summary_lte_avail is not None:
+             tmp = summary_lte_avail.copy()
+             tmp["RAT"] = "LTE"
+             availability_frames.append(tmp)
+
+         current_availability_summary_all_df = (
+             pd.concat(availability_frames, ignore_index=True)
+             if availability_frames
+             else pd.DataFrame()
+         )
+
+         multi_rat_df = analyze_multirat_availability(
+             analysis_df,
+             float(sla_2g.value),
+             float(sla_3g.value),
+             float(sla_lte.value),
+         )
+         if multi_rat_df is not None:
+             multi_rat_table.value = multi_rat_df.round(2)
+         else:
+             multi_rat_table.value = pd.DataFrame()
+
+         current_multi_rat_df = multi_rat_df if multi_rat_df is not None else None
+
+         # Persistent availability (UI uses configurable threshold, export keeps 3 days)
+         persistent_df = pd.DataFrame()
+         if multi_rat_df is not None:
+             persistent_df = analyze_persistent_availability(
+                 analysis_df,
+                 multi_rat_df,
+                 float(sla_2g.value),
+                 float(sla_3g.value),
+                 float(sla_lte.value),
+                 int(min_persistent_days_widget.value),
+             )
+
+         current_persistent_df = (
+             persistent_df
+             if persistent_df is not None and not persistent_df.empty
+             else None
+         )
+
+         # Export-specific multi-RAT & persistent (based on full_df as in the Streamlit app)
+         export_multi_rat_base = analyze_multirat_availability(
+             full_df,
+             float(sla_2g.value),
+             float(sla_3g.value),
+             float(sla_lte.value),
+         )
+         current_export_multi_rat_df = (
+             export_multi_rat_base
+             if export_multi_rat_base is not None
+             else pd.DataFrame()
+         )
+
+         export_persistent_tmp = pd.DataFrame()
+         if export_multi_rat_base is not None:
+             export_persistent_tmp = analyze_persistent_availability(
+                 full_df,
+                 export_multi_rat_base,
+                 float(sla_2g.value),
+                 float(sla_3g.value),
+                 float(sla_lte.value),
+                 3,
+             )
+         current_export_persistent_df = (
+             export_persistent_tmp
+             if export_persistent_tmp is not None and not export_persistent_tmp.empty
+             else pd.DataFrame()
+         )
+
+         # Precompute export bytes so the download button is instant
+         current_export_bytes = _build_export_bytes()
+
+         # Update all drill-down & map views
+         _update_site_controls()
+         _update_city_controls()
+         _update_daily_availability_view()
+         _update_top_sites_and_maps()
+         _update_persistent_table_view()
+
+         status_pane.alert_type = "success"
+         status_pane.object = "Analysis completed."
+
+     except Exception as exc:  # noqa: BLE001
+         status_pane.alert_type = "danger"
+         status_pane.object = f"Error: {exc}"
+
+
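Note that the pre/post overlap check in `run_analysis` above only catches identical or fully nested ranges. If a stricter guard were wanted, the standard closed-interval test is (hypothetical `ranges_overlap` helper, not part of the diff):

```python
def ranges_overlap(a_start, a_end, b_start, b_end):
    # Two closed intervals overlap iff each one starts before the other ends.
    return a_start <= b_end and b_start <= a_end


print(ranges_overlap(1, 5, 4, 8))  # True  (partial overlap, missed by the check above)
print(ranges_overlap(1, 3, 4, 8))  # False (disjoint)
```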
1345
+ run_button.on_click(run_analysis)
+
+
+ def _update_site_controls() -> None:
+     """Populate the site selection widget from current_analysis_df and refresh the view."""
+     if current_analysis_df is None or current_analysis_df.empty:
+         site_select.options = {}
+         site_select.value = None
+         site_traffic_plot.object = None
+         site_avail_plot.object = None
+         site_degraded_table.value = pd.DataFrame()
+         return
+
+     sites_df = (
+         current_analysis_df[["code", "City"]]
+         .drop_duplicates()
+         .sort_values(by=["City", "code"])
+     )
+
+     options: dict[str, int] = {}
+     for _, row in sites_df.iterrows():
+         label = (
+             f"{row['City']}_{row['code']}"
+             if pd.notna(row["City"])
+             else str(row["code"])
+         )
+         options[label] = int(row["code"])
+
+     site_select.options = options
+     if options and site_select.value not in options.values():
+         # When options is a dict, Select.value is the mapped value (the site code)
+         site_select.value = next(iter(options.values()))
+
+     _update_site_view()
+
+
1381
+ def _update_site_view(event=None) -> None:  # noqa: D401, ARG001
+     """Update site drill-down plots and table from current_analysis_df and site_select."""
+     if current_analysis_df is None or current_analysis_df.empty:
+         site_traffic_plot.object = None
+         site_avail_plot.object = None
+         site_degraded_table.value = pd.DataFrame()
+         return
+
+     selected_code = site_select.value
+     if selected_code is None:
+         site_traffic_plot.object = None
+         site_avail_plot.object = None
+         site_degraded_table.value = pd.DataFrame()
+         return
+
+     site_detail_df = current_analysis_df[
+         current_analysis_df["code"] == int(selected_code)
+     ].copy()
+     if site_detail_df.empty:
+         site_traffic_plot.object = None
+         site_avail_plot.object = None
+         site_degraded_table.value = pd.DataFrame()
+         return
+
+     site_detail_df = site_detail_df.sort_values("date")
+
+     # Traffic over time
+     traffic_cols = [
+         col
+         for col in ["total_voice_trafic", "total_data_trafic"]
+         if col in site_detail_df.columns
+     ]
+     if traffic_cols:
+         traffic_long = site_detail_df[["date"] + traffic_cols].melt(
+             id_vars="date",
+             value_vars=traffic_cols,
+             var_name="metric",
+             value_name="value",
+         )
+         fig_traffic = px.line(
+             traffic_long,
+             x="date",
+             y="value",
+             color="metric",
+             color_discrete_sequence=px.colors.qualitative.Plotly,
+         )
+         fig_traffic.update_layout(
+             template="plotly_white",
+             plot_bgcolor="white",
+             paper_bgcolor="white",
+         )
+         site_traffic_plot.object = fig_traffic
+     else:
+         site_traffic_plot.object = None
+
+     # Availability over time per RAT
+     avail_cols: list[str] = []
+     rename_map: dict[str, str] = {}
+     if "2g_tch_avail" in site_detail_df.columns:
+         avail_cols.append("2g_tch_avail")
+         rename_map["2g_tch_avail"] = "2G"
+     if "3g_cell_avail" in site_detail_df.columns:
+         avail_cols.append("3g_cell_avail")
+         rename_map["3g_cell_avail"] = "3G"
+     if "lte_cell_avail" in site_detail_df.columns:
+         avail_cols.append("lte_cell_avail")
+         rename_map["lte_cell_avail"] = "LTE"
+
+     if avail_cols:
+         avail_df = site_detail_df[["date"] + avail_cols].copy()
+         avail_df = avail_df.rename(columns=rename_map)
+         value_cols = [c for c in avail_df.columns if c != "date"]
+         avail_long = avail_df.melt(
+             id_vars="date",
+             value_vars=value_cols,
+             var_name="RAT",
+             value_name="availability",
+         )
+         fig_avail = px.line(
+             avail_long,
+             x="date",
+             y="availability",
+             color="RAT",
+             color_discrete_sequence=px.colors.qualitative.Plotly,
+         )
+         fig_avail.update_layout(
+             template="plotly_white",
+             plot_bgcolor="white",
+             paper_bgcolor="white",
+         )
+         site_avail_plot.object = fig_avail
+
+         # Days with availability below SLA per RAT
+         site_detail_df["date_only"] = site_detail_df["date"].dt.date
+         degraded_rows_site: list[dict] = []
+         for rat_col, rat_name, sla_value in [
+             ("2g_tch_avail", "2G", float(sla_2g.value)),
+             ("3g_cell_avail", "3G", float(sla_3g.value)),
+             ("lte_cell_avail", "LTE", float(sla_lte.value)),
+         ]:
+             if rat_col in site_detail_df.columns:
+                 daily_site = (
+                     site_detail_df.groupby("date_only")[rat_col].mean().dropna()
+                 )
+                 mask = daily_site < sla_value
+                 for d, val in daily_site[mask].items():
+                     degraded_rows_site.append(
+                         {
+                             "RAT": rat_name,
+                             "date": d,
+                             "avg_availability": val,
+                             "SLA": sla_value,
+                         }
+                     )
+         if degraded_rows_site:
+             degraded_site_df = pd.DataFrame(degraded_rows_site)
+             site_degraded_table.value = degraded_site_df.round(2)
+         else:
+             site_degraded_table.value = pd.DataFrame()
+     else:
+         site_avail_plot.object = None
+         site_degraded_table.value = pd.DataFrame()
+
+
1505
+ def _update_city_controls() -> None:
+     """Populate the city selection widget from current_analysis_df and refresh the view."""
+     if current_analysis_df is None or current_analysis_df.empty:
+         city_select.options = []
+         city_select.value = None
+         city_traffic_plot.object = None
+         city_avail_plot.object = None
+         city_degraded_table.value = pd.DataFrame()
+         return
+
+     if (
+         "City" not in current_analysis_df.columns
+         or not current_analysis_df["City"].notna().any()
+     ):
+         city_select.options = []
+         city_select.value = None
+         city_traffic_plot.object = None
+         # Plotly panes take a figure or None, not a DataFrame
+         city_avail_plot.object = None
+         city_degraded_table.value = pd.DataFrame()
+         return
+
+     cities_df = (
+         current_analysis_df[["City"]].dropna().drop_duplicates().sort_values(by="City")
+     )
+     options = cities_df["City"].tolist()
+     city_select.options = options
+     if options and city_select.value not in options:
+         city_select.value = options[0]
+
+     _update_city_view()
+
+
1537
+ def _update_city_view(event=None) -> None:  # noqa: D401, ARG001
+     """Update city drill-down plots and degraded days table based on city_select."""
+     if current_analysis_df is None or current_analysis_df.empty:
+         city_traffic_plot.object = None
+         city_avail_plot.object = None
+         city_degraded_table.value = pd.DataFrame()
+         return
+
+     selected_city = city_select.value
+     if not selected_city:
+         city_traffic_plot.object = None
+         city_avail_plot.object = None
+         city_degraded_table.value = pd.DataFrame()
+         return
+
+     city_detail_df = current_analysis_df[
+         current_analysis_df["City"] == selected_city
+     ].copy()
+     if city_detail_df.empty:
+         city_traffic_plot.object = None
+         city_avail_plot.object = None
+         city_degraded_table.value = pd.DataFrame()
+         return
+
+     city_detail_df = city_detail_df.sort_values("date")
+
+     # Traffic aggregated at city level
+     traffic_cols_city = [
+         col
+         for col in ["total_voice_trafic", "total_data_trafic"]
+         if col in city_detail_df.columns
+     ]
+     if traffic_cols_city:
+         city_traffic = (
+             city_detail_df.groupby("date")[traffic_cols_city].sum().reset_index()
+         )
+         traffic_long_city = city_traffic.melt(
+             id_vars="date",
+             value_vars=traffic_cols_city,
+             var_name="metric",
+             value_name="value",
+         )
+         fig_traffic_city = px.line(
+             traffic_long_city,
+             x="date",
+             y="value",
+             color="metric",
+             color_discrete_sequence=px.colors.qualitative.Plotly,
+         )
+         fig_traffic_city.update_layout(
+             template="plotly_white",
+             plot_bgcolor="white",
+             paper_bgcolor="white",
+         )
+         city_traffic_plot.object = fig_traffic_city
+     else:
+         city_traffic_plot.object = None
+
+     # Availability aggregated at city level
+     avail_cols_city: list[str] = []
+     rename_map_city: dict[str, str] = {}
+     if "2g_tch_avail" in city_detail_df.columns:
+         avail_cols_city.append("2g_tch_avail")
+         rename_map_city["2g_tch_avail"] = "2G"
+     if "3g_cell_avail" in city_detail_df.columns:
+         avail_cols_city.append("3g_cell_avail")
+         rename_map_city["3g_cell_avail"] = "3G"
+     if "lte_cell_avail" in city_detail_df.columns:
+         avail_cols_city.append("lte_cell_avail")
+         rename_map_city["lte_cell_avail"] = "LTE"
+
+     if avail_cols_city:
+         avail_city_df = city_detail_df[["date"] + avail_cols_city].copy()
+         avail_city_df = avail_city_df.rename(columns=rename_map_city)
+         value_cols_city = [c for c in avail_city_df.columns if c != "date"]
+         avail_long_city = avail_city_df.melt(
+             id_vars="date",
+             value_vars=value_cols_city,
+             var_name="RAT",
+             value_name="availability",
+         )
+         fig_avail_city = px.line(
+             avail_long_city,
+             x="date",
+             y="availability",
+             color="RAT",
+             color_discrete_sequence=px.colors.qualitative.Plotly,
+         )
+         fig_avail_city.update_layout(
+             template="plotly_white",
+             plot_bgcolor="white",
+             paper_bgcolor="white",
+         )
+         city_avail_plot.object = fig_avail_city
+
+         city_detail_df["date_only"] = city_detail_df["date"].dt.date
+         degraded_rows_city: list[dict] = []
+         for rat_col, rat_name, sla_value in [
+             ("2g_tch_avail", "2G", float(sla_2g.value)),
+             ("3g_cell_avail", "3G", float(sla_3g.value)),
+             ("lte_cell_avail", "LTE", float(sla_lte.value)),
+         ]:
+             if rat_col in city_detail_df.columns:
+                 daily_city = (
+                     city_detail_df.groupby("date_only")[rat_col].mean().dropna()
+                 )
+                 mask_city = daily_city < sla_value
+                 for d, val in daily_city[mask_city].items():
+                     degraded_rows_city.append(
+                         {
+                             "RAT": rat_name,
+                             "date": d,
+                             "avg_availability": val,
+                             "SLA": sla_value,
+                         }
+                     )
+         if degraded_rows_city:
+             degraded_city_df = pd.DataFrame(degraded_rows_city)
+             city_degraded_table.value = degraded_city_df.round(2)
+         else:
+             city_degraded_table.value = pd.DataFrame()
+     else:
+         city_avail_plot.object = None
+         city_degraded_table.value = pd.DataFrame()
+
+
1663
+ def _update_daily_availability_view() -> None:
1664
+ """Daily average availability per RAT over the full analysis_df."""
1665
+ if current_analysis_df is None or current_analysis_df.empty:
1666
+ daily_avail_plot.object = None
1667
+ daily_degraded_table.value = pd.DataFrame()
1668
+ return
1669
+
1670
+ temp_df = current_analysis_df.copy()
1671
+ if not any(
1672
+ col in temp_df.columns
+ for col in ["2g_tch_avail", "3g_cell_avail", "lte_cell_avail"]
+ ):
+ daily_avail_plot.object = None
+ daily_degraded_table.value = pd.DataFrame()
+ return
+
+ temp_df["date_only"] = temp_df["date"].dt.date
+
+ agg_dict: dict[str, str] = {}
+ if "2g_tch_avail" in temp_df.columns:
+ agg_dict["2g_tch_avail"] = "mean"
+ if "3g_cell_avail" in temp_df.columns:
+ agg_dict["3g_cell_avail"] = "mean"
+ if "lte_cell_avail" in temp_df.columns:
+ agg_dict["lte_cell_avail"] = "mean"
+
+ daily_avail = (
+ temp_df.groupby("date_only", as_index=False).agg(agg_dict)
+ if agg_dict
+ else pd.DataFrame()
+ )
+
+ if daily_avail.empty:
+ daily_avail_plot.object = None
+ daily_degraded_table.value = pd.DataFrame()
+ return
+
+ rename_map: dict[str, str] = {}
+ if "2g_tch_avail" in daily_avail.columns:
+ rename_map["2g_tch_avail"] = "2G"
+ if "3g_cell_avail" in daily_avail.columns:
+ rename_map["3g_cell_avail"] = "3G"
+ if "lte_cell_avail" in daily_avail.columns:
+ rename_map["lte_cell_avail"] = "LTE"
+
+ daily_avail = daily_avail.rename(columns=rename_map)
+
+ value_cols = [c for c in daily_avail.columns if c != "date_only"]
+ if not value_cols:
+ daily_avail_plot.object = None
+ daily_degraded_table.value = pd.DataFrame()
+ return
+
+ daily_melt = daily_avail.melt(
+ id_vars="date_only",
+ value_vars=value_cols,
+ var_name="RAT",
+ value_name="availability",
+ )
+
+ fig = px.line(
+ daily_melt,
+ x="date_only",
+ y="availability",
+ color="RAT",
+ markers=True,
+ color_discrete_sequence=px.colors.qualitative.Plotly,
+ )
+ fig.update_layout(
+ template="plotly_white",
+ plot_bgcolor="white",
+ paper_bgcolor="white",
+ )
+ daily_avail_plot.object = fig
+
+ degraded_rows: list[dict] = []
+ for rat_name, sla_value in [
+ ("2G", float(sla_2g.value)),
+ ("3G", float(sla_3g.value)),
+ ("LTE", float(sla_lte.value)),
+ ]:
+ if rat_name in daily_avail.columns:
+ series = daily_avail[rat_name]
+ mask = series < sla_value
+ for d, val in zip(daily_avail.loc[mask, "date_only"], series[mask]):
+ degraded_rows.append(
+ {
+ "RAT": rat_name,
+ "date": d,
+ "avg_availability": val,
+ "SLA": sla_value,
+ }
+ )
+
+ if degraded_rows:
+ degraded_df = pd.DataFrame(degraded_rows)
+ daily_degraded_table.value = degraded_df.round(2)
+ else:
+ daily_degraded_table.value = pd.DataFrame()
+
+
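The SLA check above emits one row per (RAT, day) whose daily mean availability falls below the configured threshold. Stripped of the pandas machinery, the detection reduces to a simple filter; a minimal sketch (the `degraded_days` helper name is illustrative, not part of the app):

```python
def degraded_days(daily_avg: dict, sla: float) -> list[tuple]:
    """Return (day, value) pairs whose average availability sits below the SLA."""
    return [(day, val) for day, val in daily_avg.items() if val < sla]

# Flag days under a 99% availability SLA
flagged = degraded_days({"2024-01-01": 99.5, "2024-01-02": 97.0}, 99.0)
# → [("2024-01-02", 97.0)]
```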
1764
+ def _update_top_sites_and_maps(event=None) -> None: # noqa: ARG001
+ """Top traffic sites and geographic maps based on last analysis period."""
+ if current_analysis_last_period_df is None or current_analysis_last_period_df.empty:
+ top_data_sites_table.value = pd.DataFrame()
+ top_voice_sites_table.value = pd.DataFrame()
+ top_data_bar_plot.object = None
+ top_voice_bar_plot.object = None
+ data_map_plot.object = None
+ voice_map_plot.object = None
+ return
+
+ df = current_analysis_last_period_df
+ n = int(number_of_top_trafic_sites.value or 25)
+
+ # Top sites by data traffic
+ top_sites = (
+ df.groupby(["code", "City"])["total_data_trafic"]
+ .sum()
+ .sort_values(ascending=False)
+ .head(n)
+ )
+ top_data_sites_table.value = top_sites.sort_values(ascending=True).reset_index()
+
+ fig_data = px.bar(
+ top_sites.reset_index(),
+ y=top_sites.reset_index()[["City", "code"]].agg(
+ lambda x: "_".join(map(str, x)), axis=1
+ ),
+ x="total_data_trafic",
+ title=f"Top {n} sites by data traffic",
+ orientation="h",
+ text="total_data_trafic",
+ color_discrete_sequence=px.colors.qualitative.Plotly,
+ )
+ fig_data.update_layout(
+ template="plotly_white",
+ plot_bgcolor="white",
+ paper_bgcolor="white",
+ )
+ top_data_bar_plot.object = fig_data
+
+ # Top sites by voice traffic
+ top_sites_voice = (
+ df.groupby(["code", "City"])["total_voice_trafic"]
+ .sum()
+ .sort_values(ascending=False)
+ .head(n)
+ )
+ top_voice_sites_table.value = top_sites_voice.sort_values(
+ ascending=True
+ ).reset_index()
+
+ fig_voice = px.bar(
+ top_sites_voice.reset_index(),
+ y=top_sites_voice.reset_index()[["City", "code"]].agg(
+ lambda x: "_".join(map(str, x)), axis=1
+ ),
+ x="total_voice_trafic",
+ title=f"Top {n} sites by voice traffic",
+ orientation="h",
+ text="total_voice_trafic",
+ color_discrete_sequence=px.colors.qualitative.Plotly,
+ )
+ fig_voice.update_layout(
+ template="plotly_white",
+ plot_bgcolor="white",
+ paper_bgcolor="white",
+ )
+ top_voice_bar_plot.object = fig_voice
+
+ # Maps
+ if {"Latitude", "Longitude"}.issubset(df.columns):
+ min_size = 5
+ max_size = 40
+
+ # Data traffic map
+ df_data = (
+ df.groupby(["code", "City", "Latitude", "Longitude"])["total_data_trafic"]
+ .sum()
+ .reset_index()
+ )
+ if not df_data.empty:
+ traffic_data_min = df_data["total_data_trafic"].min()
+ traffic_data_max = df_data["total_data_trafic"].max()
+ if traffic_data_max > traffic_data_min:
+ df_data["bubble_size"] = df_data["total_data_trafic"].apply(
+ lambda x: min_size
+ + (max_size - min_size)
+ * (x - traffic_data_min)
+ / (traffic_data_max - traffic_data_min)
+ )
+ else:
+ df_data["bubble_size"] = min_size
+
+ custom_blue_red = [
+ [0.0, "#4292c6"],
+ [0.2, "#2171b5"],
+ [0.4, "#084594"],
+ [0.6, "#cb181d"],
+ [0.8, "#a50f15"],
+ [1.0, "#67000d"],
+ ]
+
+ fig_map_data = px.scatter_map(
+ df_data,
+ lat="Latitude",
+ lon="Longitude",
+ color="total_data_trafic",
+ size="bubble_size",
+ color_continuous_scale=custom_blue_red,
+ size_max=max_size,
+ zoom=10,
+ height=600,
+ title="Data traffic distribution",
+ hover_data={"code": True, "total_data_trafic": True},
+ hover_name="code",
+ text=[str(x) for x in df_data["code"]],
+ )
+ fig_map_data.update_layout(
+ # px.scatter_map renders with MapLibre, so the layout key is map_style
+ map_style="open-street-map",
+ coloraxis_colorbar=dict(title="Total Data Traffic (MB)"),
+ coloraxis=dict(cmin=traffic_data_min, cmax=traffic_data_max),
+ font=dict(size=10, color="black"),
+ )
+ data_map_plot.object = fig_map_data
+ else:
+ data_map_plot.object = None
+
+ # Voice traffic map
+ df_voice = (
+ df.groupby(["code", "City", "Latitude", "Longitude"])["total_voice_trafic"]
+ .sum()
+ .reset_index()
+ )
+ if not df_voice.empty:
+ traffic_voice_min = df_voice["total_voice_trafic"].min()
+ traffic_voice_max = df_voice["total_voice_trafic"].max()
+ if traffic_voice_max > traffic_voice_min:
+ df_voice["bubble_size"] = df_voice["total_voice_trafic"].apply(
+ lambda x: min_size
+ + (max_size - min_size)
+ * (x - traffic_voice_min)
+ / (traffic_voice_max - traffic_voice_min)
+ )
+ else:
+ df_voice["bubble_size"] = min_size
+
+ custom_blue_red = [
+ [0.0, "#4292c6"],
+ [0.2, "#2171b5"],
+ [0.4, "#084594"],
+ [0.6, "#cb181d"],
+ [0.8, "#a50f15"],
+ [1.0, "#67000d"],
+ ]
+
+ fig_map_voice = px.scatter_map(
+ df_voice,
+ lat="Latitude",
+ lon="Longitude",
+ color="total_voice_trafic",
+ size="bubble_size",
+ color_continuous_scale=custom_blue_red,
+ size_max=max_size,
+ zoom=10,
+ height=600,
+ title="Voice traffic distribution",
+ hover_data={"code": True, "total_voice_trafic": True},
+ hover_name="code",
+ text=[str(x) for x in df_voice["code"]],
+ )
+ fig_map_voice.update_layout(
+ # px.scatter_map renders with MapLibre, so the layout key is map_style
+ map_style="open-street-map",
+ coloraxis_colorbar=dict(title="Total Voice Traffic (MB)"),
+ coloraxis=dict(cmin=traffic_voice_min, cmax=traffic_voice_max),
+ font=dict(size=10, color="black"),
+ )
+ voice_map_plot.object = fig_map_voice
+ else:
+ voice_map_plot.object = None
+ else:
+ data_map_plot.object = None
+ voice_map_plot.object = None
+
+
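The bubble sizing in both maps is a min-max rescale of raw traffic into the `[min_size, max_size]` pixel range, with a constant fallback when every site carries the same traffic (avoiding a division by zero). A standalone sketch of the same formula (the `scale_bubbles` helper name is illustrative):

```python
def scale_bubbles(values, min_size=5, max_size=40):
    """Linearly rescale raw traffic values into a pixel-size range."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # Degenerate case: all sites have identical traffic
        return [float(min_size)] * len(values)
    return [min_size + (max_size - min_size) * (v - lo) / (hi - lo) for v in values]

scale_bubbles([0, 50, 100])  # → [5.0, 22.5, 40.0]
```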
1949
+ def _update_persistent_table_view(event=None) -> None: # noqa: D401, ARG001
+ """Update persistent issues table based on current_persistent_df and top_critical_n."""
+ if current_persistent_df is None or current_persistent_df.empty:
+ persistent_table.value = pd.DataFrame()
+ return
+
+ n = int(top_critical_n_widget.value or 25)
+ persistent_table.value = current_persistent_df.head(n).round(2)
+
+
+ def _recompute_persistent_from_widget(event=None) -> None: # noqa: ARG001
+ """Recompute persistent issues when the minimum consecutive days widget changes."""
+ global current_persistent_df
+
+ if (
+ current_analysis_df is None
+ or current_analysis_df.empty
+ or current_multi_rat_df is None
+ or current_multi_rat_df.empty
+ ):
+ current_persistent_df = None
+ persistent_table.value = pd.DataFrame()
+ return
+
+ persistent_df = analyze_persistent_availability(
+ current_analysis_df,
+ current_multi_rat_df,
+ float(sla_2g.value),
+ float(sla_3g.value),
+ float(sla_lte.value),
+ int(min_persistent_days_widget.value),
+ )
+
+ current_persistent_df = (
+ persistent_df if persistent_df is not None and not persistent_df.empty else None
+ )
+ _update_persistent_table_view()
+
+
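`analyze_persistent_availability` (defined earlier in the module) filters sites by a minimum number of consecutive degraded days. The core of such a check is a longest-run scan over the daily availability series; a minimal sketch under that assumption (the `max_consecutive_below` name is illustrative, not from the app):

```python
def max_consecutive_below(values, sla):
    """Length of the longest run of consecutive days below the SLA."""
    best = run = 0
    for v in values:
        run = run + 1 if v < sla else 0  # extend or reset the current run
        best = max(best, run)
    return best

# A site qualifies as "persistent" when this exceeds min_persistent_days
max_consecutive_below([99.0, 97.5, 96.0, 99.2, 95.0], 98.0)  # → 2
```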
1988
+ def _build_export_bytes() -> bytes:
+ """Build Excel report bytes mirroring the Streamlit export structure."""
+ if current_full_df is None:
+ return b""
+
+ dfs: list[pd.DataFrame] = [
+ current_full_df,
+ (
+ current_sum_pre_post_df
+ if current_sum_pre_post_df is not None
+ else pd.DataFrame()
+ ),
+ (
+ current_avg_pre_post_df
+ if current_avg_pre_post_df is not None
+ else pd.DataFrame()
+ ),
+ (
+ current_monthly_voice_df
+ if current_monthly_voice_df is not None
+ else pd.DataFrame()
+ ),
+ (
+ current_monthly_data_df
+ if current_monthly_data_df is not None
+ else pd.DataFrame()
+ ),
+ (
+ current_availability_summary_all_df
+ if current_availability_summary_all_df is not None
+ else pd.DataFrame()
+ ),
+ current_site_2g_avail if current_site_2g_avail is not None else pd.DataFrame(),
+ current_site_3g_avail if current_site_3g_avail is not None else pd.DataFrame(),
+ (
+ current_site_lte_avail
+ if current_site_lte_avail is not None
+ else pd.DataFrame()
+ ),
+ (
+ current_export_multi_rat_df
+ if current_export_multi_rat_df is not None
+ else pd.DataFrame()
+ ),
+ (
+ current_export_persistent_df
+ if current_export_persistent_df is not None
+ else pd.DataFrame()
+ ),
+ ]
+
+ sheet_names = [
+ "Global_Trafic_Analysis",
+ "Sum_pre_post_analysis",
+ "Avg_pre_post_analysis",
+ "Monthly_voice_analysis",
+ "Monthly_data_analysis",
+ "Availability_Summary_All_RAT",
+ "TwoG_Availability_By_Site",
+ "ThreeG_Availability_By_Site",
+ "LTE_Availability_By_Site",
+ "MultiRAT_Availability_By_Site",
+ "Top_Critical_Sites",
+ ]
+
+ return write_dfs_to_excel(dfs, sheet_names, index=True)
+
+
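`_build_export_bytes` relies on `dfs` and `sheet_names` staying in lockstep; the `zip` inside `write_dfs_to_excel` would silently drop trailing sheets if the two lists ever diverged. A defensive pairing helper is one way to surface that early (a sketch; `pair_sheets` is not part of the app):

```python
def pair_sheets(frames: list, names: list) -> list[tuple]:
    """Zip sheet names with frames, refusing silently mismatched lists."""
    if len(frames) != len(names):
        raise ValueError(f"{len(frames)} DataFrames but {len(names)} sheet names")
    return list(zip(names, frames))
```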
2056
+ def _export_callback() -> io.BytesIO:
+ # Use cached bytes from the last completed analysis to make download instant
+ data = current_export_bytes or b""
+ if not data:
+ return io.BytesIO()
+ # FileDownload expects a file path or file-like object, not raw bytes
+ return io.BytesIO(data)
+
+
+ def _df_to_csv_bytes(df: pd.DataFrame | None) -> io.BytesIO:
+ if df is None or getattr(df, "empty", True): # handles None and empty DataFrame
+ return io.BytesIO()
+ return io.BytesIO(df.to_csv(index=False).encode("utf-8"))
+
+
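`_df_to_csv_bytes` wraps the encoded CSV in a `BytesIO` because the download callback must return a file-like object. The same pattern with only the standard library (no pandas), for illustration:

```python
import csv
import io

def rows_to_csv_bytes(header, rows):
    """Serialize rows to UTF-8 CSV and wrap them for a file-download callback."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(rows)
    return io.BytesIO(buf.getvalue().encode("utf-8"))

payload = rows_to_csv_bytes(["site", "traffic"], [["A1", 12]])
# payload.getvalue() decodes to "site,traffic" / "A1,12"
```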
2071
+ def _download_multi_rat_table() -> io.BytesIO:
+ value = getattr(multi_rat_table, "value", None)
+ return _df_to_csv_bytes(value if isinstance(value, pd.DataFrame) else None)
+
+
+ def _download_persistent_table() -> io.BytesIO:
+ value = getattr(persistent_table, "value", None)
+ return _df_to_csv_bytes(value if isinstance(value, pd.DataFrame) else None)
+
+
+ def _download_top_data_sites() -> io.BytesIO:
+ value = getattr(top_data_sites_table, "value", None)
+ return _df_to_csv_bytes(value if isinstance(value, pd.DataFrame) else None)
+
+
+ def _download_top_voice_sites() -> io.BytesIO:
+ value = getattr(top_voice_sites_table, "value", None)
+ return _df_to_csv_bytes(value if isinstance(value, pd.DataFrame) else None)
+
+
+ def _open_fullscreen_from_pane(plot_pane: pn.pane.Plotly, title: str) -> None:
+ """Open the given plot in the template modal as fullscreen view."""
+ if plot_pane.object is None:
+ return
+
+ fullscreen_plot.object = plot_pane.object
+ content = pn.Column(
+ pn.pane.Markdown(f"### {title}"),
+ fullscreen_plot,
+ sizing_mode="stretch_both",
+ styles={"width": "95vw", "height": "90vh"},
+ )
+
+ if "template" not in globals():
+ return
+
+ # Always populate modal content first
+ if hasattr(template, "modal"):
+ try:
+ template.modal[:] = [content]
+ except Exception: # noqa: BLE001
+ try:
+ template.modal.clear()
+ template.modal.append(content)
+ except Exception: # noqa: BLE001
+ pass
+
+ # Preferred API on templates
+ if hasattr(template, "open_modal"):
+ template.open_modal()
+ return
+
+ # Fallbacks across versions
+ if hasattr(template, "modal") and hasattr(template.modal, "open"):
+ template.modal.open = True
+ if hasattr(template, "modal") and hasattr(template.modal, "visible"):
+ template.modal.visible = True
+
+
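The modal-opening code probes several Panel APIs (`open_modal`, `modal.open`, `modal.visible`) because the attribute surface has shifted across Panel versions. The generic "try each strategy until one works" pattern it approximates can be factored out like this (a sketch; `first_success` is not a Panel API):

```python
def first_success(*actions):
    """Run callables in order; return True as soon as one succeeds."""
    for action in actions:
        try:
            action()
            return True
        except Exception:
            continue  # this strategy failed, try the next fallback
    return False

# first_success(template.open_modal) would fall through to the
# attribute-setting fallbacks only if the preferred call raised.
```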
2130
+ def _on_site_traffic_fullscreen(event=None) -> None: # noqa: ARG001
+ _open_fullscreen_from_pane(site_traffic_plot, "Site traffic over time")
+
+
+ def _on_site_avail_fullscreen(event=None) -> None: # noqa: ARG001
+ _open_fullscreen_from_pane(site_avail_plot, "Site availability over time")
+
+
+ def _on_city_traffic_fullscreen(event=None) -> None: # noqa: ARG001
+ _open_fullscreen_from_pane(city_traffic_plot, "City traffic over time")
+
+
+ def _on_city_avail_fullscreen(event=None) -> None: # noqa: ARG001
+ _open_fullscreen_from_pane(city_avail_plot, "City availability over time")
+
+
+ def _on_daily_avail_fullscreen(event=None) -> None: # noqa: ARG001
+ _open_fullscreen_from_pane(daily_avail_plot, "Daily average availability per RAT")
+
+
+ def _on_top_data_fullscreen(event=None) -> None: # noqa: ARG001
+ _open_fullscreen_from_pane(top_data_bar_plot, "Top sites by data traffic")
+
+
+ def _on_top_voice_fullscreen(event=None) -> None: # noqa: ARG001
+ _open_fullscreen_from_pane(top_voice_bar_plot, "Top sites by voice traffic")
+
+
+ def _on_data_map_fullscreen(event=None) -> None: # noqa: ARG001
+ _open_fullscreen_from_pane(data_map_plot, "Data traffic map")
+
+
+ def _on_voice_map_fullscreen(event=None) -> None: # noqa: ARG001
+ _open_fullscreen_from_pane(voice_map_plot, "Voice traffic map")
+
+
+ # Reactive bindings for drill-down controls & export
+ site_select.param.watch(_update_site_view, "value")
+ city_select.param.watch(_update_city_view, "value")
+ top_critical_n_widget.param.watch(_update_persistent_table_view, "value")
+ number_of_top_trafic_sites.param.watch(_update_top_sites_and_maps, "value")
+ min_persistent_days_widget.param.watch(_recompute_persistent_from_widget, "value")
+
+ export_button.callback = _export_callback
+ multi_rat_download.callback = _download_multi_rat_table
+ persistent_download.callback = _download_persistent_table
+ top_data_download.callback = _download_top_data_sites
+ top_voice_download.callback = _download_top_voice_sites
+
+ site_traffic_fullscreen_btn.on_click(_on_site_traffic_fullscreen)
+ site_avail_fullscreen_btn.on_click(_on_site_avail_fullscreen)
+ city_traffic_fullscreen_btn.on_click(_on_city_traffic_fullscreen)
+ city_avail_fullscreen_btn.on_click(_on_city_avail_fullscreen)
+ daily_avail_fullscreen_btn.on_click(_on_daily_avail_fullscreen)
+ top_data_fullscreen_btn.on_click(_on_top_data_fullscreen)
+ top_voice_fullscreen_btn.on_click(_on_top_voice_fullscreen)
+ data_map_fullscreen_btn.on_click(_on_data_map_fullscreen)
+ voice_map_fullscreen_btn.on_click(_on_voice_map_fullscreen)
+
2188
+
2189
+
2190
+ # --------------------------------------------------------------------------------------
+ # Material Template layout
+ # --------------------------------------------------------------------------------------
+
+
+ template = pn.template.MaterialTemplate(
+ title="📊 Global Traffic Analysis - Panel (2G / 3G / LTE)",
+ )
+
+ # Ensure the template modal is large enough for fullscreen charts
+ template.modal.sizing_mode = "stretch_both"
+ template.modal.styles = {
+ "width": "95vw",
+ "height": "90vh",
+ "maxWidth": "95vw",
+ "maxHeight": "90vh",
+ }
+
+ sidebar_content = pn.Column(
+ """This Panel app is a migration of the existing Streamlit-based global traffic analysis.
+
+ Upload the 3 traffic reports (2G / 3G / LTE), configure the analysis periods and SLAs, then run the analysis.
+
+ In this first step, the app only validates the pipeline and shows a lightweight summary of the inputs.\n\nFull KPIs and visualizations will be added progressively.""",
+ "---",
+ file_2g,
+ file_3g,
+ file_lte,
+ "---",
+ pre_range,
+ post_range,
+ last_range,
+ "---",
+ sla_2g,
+ sla_3g,
+ sla_lte,
+ "---",
+ number_of_top_trafic_sites,
+ min_persistent_days_widget,
+ top_critical_n_widget,
+ "---",
+ run_button,
+ )
+
+ main_content = pn.Column(
+ status_pane,
+ pn.pane.Markdown("## Input datasets summary"),
+ summary_table,
+ pn.layout.Divider(),
+ pn.pane.Markdown("## Summary Analysis Pre / Post"),
+ sum_pre_post_table,
+ pn.layout.Divider(),
+ pn.pane.Markdown("## Availability vs SLA (per RAT)"),
+ pn.Tabs(
+ (
+ "2G",
+ pn.Column(
+ summary_2g_table, pn.pane.Markdown("Worst 25 sites"), worst_2g_table
+ ),
+ ),
+ (
+ "3G",
+ pn.Column(
+ summary_3g_table, pn.pane.Markdown("Worst 25 sites"), worst_3g_table
+ ),
+ ),
+ (
+ "LTE",
+ pn.Column(
+ summary_lte_table, pn.pane.Markdown("Worst 25 sites"), worst_lte_table
+ ),
+ ),
+ ),
+ pn.layout.Divider(),
+ pn.pane.Markdown("## Multi-RAT Availability (post-period)"),
+ multi_rat_table,
+ multi_rat_download,
+ pn.layout.Divider(),
+ pn.pane.Markdown("## Persistent availability issues (critical sites)"),
+ persistent_table,
+ persistent_download,
+ pn.layout.Divider(),
+ pn.pane.Markdown("## Site drill-down: traffic and availability over time"),
+ site_select,
+ site_traffic_plot,
+ site_traffic_fullscreen_btn,
+ site_avail_plot,
+ site_avail_fullscreen_btn,
+ site_degraded_table,
+ pn.layout.Divider(),
+ pn.pane.Markdown("## City drill-down: traffic and availability over time"),
+ city_select,
+ city_traffic_plot,
+ city_traffic_fullscreen_btn,
+ city_avail_plot,
+ city_avail_fullscreen_btn,
+ city_degraded_table,
+ pn.layout.Divider(),
+ pn.pane.Markdown("## Daily average availability per RAT"),
+ daily_avail_plot,
+ daily_avail_fullscreen_btn,
+ daily_degraded_table,
+ pn.layout.Divider(),
+ pn.pane.Markdown("## Top traffic sites and geographic maps (last period)"),
+ pn.Row(
+ pn.Column(
+ pn.pane.Markdown("### Top sites by data traffic"),
+ top_data_sites_table,
+ top_data_download,
+ top_data_bar_plot,
+ top_data_fullscreen_btn,
+ ),
+ pn.Column(
+ pn.pane.Markdown("### Top sites by voice traffic"),
+ top_voice_sites_table,
+ top_voice_download,
+ top_voice_bar_plot,
+ top_voice_fullscreen_btn,
+ ),
+ ),
+ pn.Row(
+ pn.Column(
+ pn.pane.Markdown("### Data traffic map"),
+ data_map_plot,
+ data_map_fullscreen_btn,
+ ),
+ pn.Column(
+ pn.pane.Markdown("### Voice traffic map"),
+ voice_map_plot,
+ voice_map_fullscreen_btn,
+ ),
+ ),
+ pn.layout.Divider(),
+ pn.pane.Markdown("## Export"),
+ export_button,
+ )
+
+ template.sidebar.append(sidebar_content)
+ template.main.append(main_content)
+
+
+ template.servable()